Feb 26 11:05:32 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 26 11:05:32 crc restorecon[4574]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 
11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 11:05:32 crc 
restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 
11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:32 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 11:05:33 crc restorecon[4574]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 11:05:33 crc kubenswrapper[4724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 11:05:33 crc kubenswrapper[4724]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 11:05:33 crc kubenswrapper[4724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 11:05:33 crc kubenswrapper[4724]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
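[Editor's aside: the kubenswrapper warnings here and immediately below say that --container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, and --system-reserved are deprecated flags that should instead be set in the file passed via the kubelet's --config flag. As a minimal sketch only, assuming a CRI-O socket path, taint, and reservation values that are hypothetical and not taken from this node, the equivalent KubeletConfiguration file would look like:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (hypothetical CRI-O socket path)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir (hypothetical path)
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints (hypothetical taint)
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved (hypothetical reservations)
    systemReserved:
      cpu: 500m
      memory: 1Gi
    # --minimum-container-ttl-duration is superseded by eviction settings;
    # threshold below is a hypothetical example
    evictionHard:
      memory.available: 100Mi

See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ (the URL cited in the warnings themselves) for the full schema.]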
Feb 26 11:05:33 crc kubenswrapper[4724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 26 11:05:33 crc kubenswrapper[4724]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.769063 4724 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778028 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778060 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778066 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778070 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778076 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778083 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778087 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778091 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778095 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778100 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778105 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778110 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778115 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778120 4724 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778123 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778127 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778131 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778135 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778138 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778142 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778146 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778149 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778174 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778192 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778196 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778201 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778205 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778210 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778214 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778218 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778222 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778226 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778230 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778234 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778238 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778244 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778248 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778252 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778255 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778259 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778262 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778266 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778270 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778274 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778278 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778282 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778285 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778290 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778294 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778298 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778301 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778306 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778310 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778314 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778318 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778321 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778324 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778328 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778331 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778335 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778338 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778342 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778346 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778349 4724 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778353 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778357 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778360 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778364 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778367 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778371 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.778375 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779340 4724 flags.go:64] FLAG: --address="0.0.0.0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779361 4724 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779372 4724 flags.go:64] FLAG: --anonymous-auth="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779380 4724 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779387 4724 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779393 4724 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779402 4724 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779409 4724 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779414 4724 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779420 4724 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779425 4724 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779431 4724 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779438 4724 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779444 4724 flags.go:64] FLAG: --cgroup-root=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779450 4724 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779455 4724 flags.go:64] FLAG: --client-ca-file=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779461 4724 flags.go:64] FLAG: --cloud-config=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779466 4724 flags.go:64] FLAG: --cloud-provider=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779471 4724 flags.go:64] FLAG: --cluster-dns="[]"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779482 4724 flags.go:64] FLAG: --cluster-domain=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779488 4724 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779493 4724 flags.go:64] FLAG: --config-dir=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779498 4724 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779505 4724 flags.go:64] FLAG: --container-log-max-files="5"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779513 4724 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779518 4724 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779525 4724 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779531 4724 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779537 4724 flags.go:64] FLAG: --contention-profiling="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779543 4724 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779548 4724 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779554 4724 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779559 4724 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779566 4724 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779572 4724 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779577 4724 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779583 4724 flags.go:64] FLAG: --enable-load-reader="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779588 4724 flags.go:64] FLAG: --enable-server="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779594 4724 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779600 4724 flags.go:64] FLAG: --event-burst="100"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779606 4724 flags.go:64] FLAG: --event-qps="50"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779611 4724 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779617 4724 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779622 4724 flags.go:64] FLAG: --eviction-hard=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779630 4724 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779635 4724 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779641 4724 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779647 4724 flags.go:64] FLAG: --eviction-soft=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779653 4724 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779658 4724 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779663 4724 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779668 4724 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779673 4724 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779678 4724 flags.go:64] FLAG: --fail-swap-on="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779684 4724 flags.go:64] FLAG: --feature-gates=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779690 4724 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779695 4724 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779701 4724 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779706 4724 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779715 4724 flags.go:64] FLAG: --healthz-port="10248"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779721 4724 flags.go:64] FLAG: --help="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779726 4724 flags.go:64] FLAG: --hostname-override=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779732 4724 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779737 4724 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779743 4724 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779748 4724 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779753 4724 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779758 4724 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779763 4724 flags.go:64] FLAG: --image-service-endpoint=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779769 4724 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779774 4724 flags.go:64] FLAG: --kube-api-burst="100"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779779 4724 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779785 4724 flags.go:64] FLAG: --kube-api-qps="50"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779790 4724 flags.go:64] FLAG: --kube-reserved=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779796 4724 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779801 4724 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779807 4724 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779813 4724 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779818 4724 flags.go:64] FLAG: --lock-file=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779823 4724 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779830 4724 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779835 4724 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779848 4724 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779854 4724 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779859 4724 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779865 4724 flags.go:64] FLAG: --logging-format="text"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779870 4724 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779877 4724 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779882 4724 flags.go:64] FLAG: --manifest-url=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779888 4724 flags.go:64] FLAG: --manifest-url-header=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779895 4724 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779901 4724 flags.go:64] FLAG: --max-open-files="1000000"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779907 4724 flags.go:64] FLAG: --max-pods="110"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779913 4724 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779918 4724 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779923 4724 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779929 4724 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779934 4724 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779940 4724 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779945 4724 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779959 4724 flags.go:64] FLAG: --node-status-max-images="50"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779965 4724 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779970 4724 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779975 4724 flags.go:64] FLAG: --pod-cidr=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779980 4724 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779989 4724 flags.go:64] FLAG: --pod-manifest-path=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.779994 4724 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780000 4724 flags.go:64] FLAG: --pods-per-core="0"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780005 4724 flags.go:64] FLAG: --port="10250"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780011 4724 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780016 4724 flags.go:64] FLAG: --provider-id=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780022 4724 flags.go:64] FLAG: --qos-reserved=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780028 4724 flags.go:64] FLAG: --read-only-port="10255"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780033 4724 flags.go:64] FLAG: --register-node="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780039 4724 flags.go:64] FLAG: --register-schedulable="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780045 4724 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780056 4724 flags.go:64] FLAG: --registry-burst="10"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780061 4724 flags.go:64] FLAG: --registry-qps="5"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780067 4724 flags.go:64] FLAG: --reserved-cpus=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780073 4724 flags.go:64] FLAG: --reserved-memory=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780080 4724 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780085 4724 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780091 4724 flags.go:64] FLAG: --rotate-certificates="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780096 4724 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780101 4724 flags.go:64] FLAG: --runonce="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780106 4724 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780111 4724 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780116 4724 flags.go:64] FLAG: --seccomp-default="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780121 4724 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780126 4724 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780133 4724 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780139 4724 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780144 4724 flags.go:64] FLAG: --storage-driver-password="root"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780149 4724 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780154 4724 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780159 4724 flags.go:64] FLAG: --storage-driver-user="root"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780165 4724 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780170 4724 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780191 4724 flags.go:64] FLAG: --system-cgroups=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780197 4724 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780207 4724 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780212 4724 flags.go:64] FLAG: --tls-cert-file=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780217 4724 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780224 4724 flags.go:64] FLAG: --tls-min-version=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780230 4724 flags.go:64] FLAG: --tls-private-key-file=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780235 4724 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780240 4724 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780246 4724 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780252 4724 flags.go:64] FLAG: --v="2"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780259 4724 flags.go:64] FLAG: --version="false"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780266 4724 flags.go:64] FLAG: --vmodule=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780272 4724 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780278 4724 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780404 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780410 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780416 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780421 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780425 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780430 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780434 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780439 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780444 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780454 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780459 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780464 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780470 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780476 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780482 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780488 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780493 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780498 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
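The I-level flags.go:64 entries above dump every kubelet flag together with its effective value, one FLAG: --name="value" pair per entry. When comparing a node's effective configuration against an expected baseline, it can help to fold this dump into a dictionary; a minimal sketch, not part of the log, under the same hypothetical journal.log export as above:

    import re

    # FLAG: --name="value" pairs as printed by flags.go:64
    flag_re = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')
    flags = {}
    with open("journal.log") as f:
        for line in f:
            for name, value in flag_re.findall(line):
                flags[name] = value
    print(flags["--node-ip"])          # 192.168.126.11
    print(flags["--system-reserved"])  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi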
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780505 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780510 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780514 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780519 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780523 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780528 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780532 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780537 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780541 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780546 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780550 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780555 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780559 4724 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780564 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780570 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780574 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780579 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780584 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780588 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780593 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780599 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780604 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780608 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780615 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780619 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780624 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780630 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780635 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780639 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780646 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780652 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780658 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780663 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780668 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780673 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780678 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780683 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780688 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780693 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780698 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780702 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780707 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780712 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780717 4724 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780721 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780727 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780731 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780736 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780742 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780749 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780755 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780760 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.780765 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.780773 4724 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.788527 4724 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.788569 4724 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788644 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788655 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788663 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788670 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788676 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
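The feature_gate.go:386 entry above records the gate set the kubelet actually applied: the many W-level "unrecognized feature gate" entries are OpenShift-specific gate names that the kubelet's embedded Kubernetes gate registry does not know, so they are logged and skipped rather than treated as fatal, and only the recognized gates appear in the final map. The Go map[...] syntax in that line can be pulled apart with a short parser; again a sketch, not part of the log, under the hypothetical journal.log assumption:

    import re

    gates_re = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")
    gates = {}
    with open("journal.log") as f:
        for line in f:
            m = gates_re.search(line)
            if m:
                for pair in m.group(1).split():
                    name, _, val = pair.partition(":")
                    gates[name] = (val == "true")
    print(gates["ValidatingAdmissionPolicy"])  # True
    print(gates["NodeSwap"])                   # False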
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788682 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788687 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788691 4724 feature_gate.go:330] unrecognized feature gate: Example
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788696 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788700 4724 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788704 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788708 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788712 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788716 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788721 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788725 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788729 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788733 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788737 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788741 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788746 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788751 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788756 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788761 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788768 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788775 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788780 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788785 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788790 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788795 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788800 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788804 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788810 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788814 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788820 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788825 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788829 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788834 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788838 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788843 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788848 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788852 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788857 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788861 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788865 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788870 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788875 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788879 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788884 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788920 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788925 4724 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788930 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788935 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788939 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788944 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788948 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788956 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788960 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788964 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788968 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788972 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788977 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788981 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788986 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788990 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.788995 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789000 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789004 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789009 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789016 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789028 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.789037 4724 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789195 4724 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789217 4724 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789224 4724 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789230 4724 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789236 4724 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789241 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789246 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789251 4724 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789255 4724 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789260 4724 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789267 4724 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789273 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789278 4724 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789283 4724 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789289 4724 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789294 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789299 4724 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789307 4724 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789312 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789317 4724 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789323 4724 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789328 4724 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789332 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789337 4724 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789342 4724 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789346 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789351 4724 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789357 4724 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789363 4724 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789368 4724 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789373 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789378 4724 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789383 4724 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789387 4724 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789393 4724 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789398 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789403 4724 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789408 4724 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789412 4724 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789417 4724 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789421 4724 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789426 4724 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789430 4724 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789435 4724 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789440 4724 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789444 4724 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789449 4724 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789453 4724 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789457 4724 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789462 4724 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789467 4724 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789471 4724 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789476 4724 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789480 4724 feature_gate.go:330] unrecognized 
feature gate: MetricsCollectionProfiles Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789484 4724 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789489 4724 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789493 4724 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789498 4724 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789502 4724 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789507 4724 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789512 4724 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789516 4724 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789522 4724 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789528 4724 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789533 4724 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789538 4724 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789542 4724 feature_gate.go:330] unrecognized feature gate: Example Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789547 4724 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789552 4724 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789557 4724 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.789562 4724 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.789570 4724 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.789757 4724 server.go:940] "Client rotation is on, will bootstrap in background" Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.793241 4724 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.797436 4724 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the 
certificate dir" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.797573 4724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.799444 4724 server.go:997] "Starting client certificate rotation" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.799474 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.799610 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.824588 4724 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.826112 4724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.827324 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.841338 4724 log.go:25] "Validated CRI v1 runtime API" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.876119 4724 log.go:25] "Validated CRI v1 image API" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.877678 4724 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.882806 4724 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-11-00-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.882840 4724 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.893694 4724 manager.go:217] Machine: {Timestamp:2026-02-26 11:05:33.891371081 +0000 UTC m=+0.547110216 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:68498961-1c21-4225-84c0-71d91bc5664e BootID:0c9118f1-edc7-4e76-b83d-ad0410c545bb Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp 
DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fd:70:3e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fd:70:3e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2b:39:e0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:78:d6:67 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c2:af:c8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c8:4e:44 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:b9:19:ae:6c:d1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:e4:01:32:52:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.893891 4724 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.894056 4724 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.895329 4724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.895476 4724 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.895506 4724 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.895684 4724 topology_manager.go:138] "Creating topology manager with none policy"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.895694 4724 container_manager_linux.go:303] "Creating device plugin manager"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.896145 4724 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.896171 4724 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.896880 4724 state_mem.go:36] "Initialized new in-memory state store"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.896960 4724 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.900650 4724 kubelet.go:418] "Attempting to sync node with API server"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.900695 4724 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.900783 4724 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.900804 4724 kubelet.go:324] "Adding apiserver pod source"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.900821 4724 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.905365 4724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.906136 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.906252 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.906371 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.906430 4724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.906446 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.908667 4724 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910196 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910216 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910223 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910229 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910239 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910246 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910253 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910263 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910272 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910279 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910289 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910296 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.910844 4724 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.911292 4724 server.go:1280] "Started kubelet"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.912287 4724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.912296 4724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 26 11:05:33 crc systemd[1]: Started Kubernetes Kubelet.
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.913079 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.913681 4724 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.920103 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.920985 4724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.921707 4724 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.921727 4724 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.921943 4724 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.923891 4724 factory.go:55] Registering systemd factory
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.923910 4724 factory.go:221] Registration of the systemd container factory successfully
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.925829 4724 server.go:460] "Adding debug handlers to kubelet server"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.926158 4724 factory.go:153] Registering CRI-O factory
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.926205 4724 factory.go:221] Registration of the crio container factory successfully
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.926338 4724 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.926365 4724 factory.go:103] Registering Raw factory
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.926386 4724 manager.go:1196] Started watching for new ooms in manager
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.929921 4724 manager.go:319] Starting recovery of all containers
Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.929692 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.932125 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.933145 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.933590 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.936455 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.936902 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.937016 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.937135 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.937292 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.937419 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.935335 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.937717 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.937914 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938016 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938102 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938214 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938305 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938383 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938461 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938672 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938765 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938842 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938919 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.938994 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939066 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939150 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939264 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939346 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939422 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939509 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.939584 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.940495 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.940591 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.940721 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.940819 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.940897 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.940999 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941102 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941231 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941341 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941445 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941562 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941673 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941783 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941889 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.941992 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942093 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942242 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942366 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942509 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942623 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942734 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942838 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.942934 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.943777 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.943894 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.943974 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944064 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944143 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944260 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944355 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944429 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944515 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944594 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944670 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944787 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944866 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.944939 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945020 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945095 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945168 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945286 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945366 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945461 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945539 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945619 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945694 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945769 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945850 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945926 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.945998 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946071 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946146 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946286 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946389 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946465 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946548 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946623 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946698 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946782 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.946858 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.948634 4724 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.948791 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.948875 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949044 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949126 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949261 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949361 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949452 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949536 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949613 4724 reconstruct.go:130] "Volume is marked 
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949692 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949771 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949852 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.949931 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950018 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950097 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950172 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950326 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950404 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950495 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950574 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.950950 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951039 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951120 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951233 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951322 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951398 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951501 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951587 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951685 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951782 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951866 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.951955 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952029 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952107 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952222 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952310 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952386 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952460 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952548 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952644 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952754 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952833 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952907 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.952992 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953066 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953150 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953254 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953338 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953411 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953483 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953571 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953648 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953724 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953800 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953874 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.953958 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954034 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954107 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954199 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954279 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954407 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954516 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954599 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954673 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954746 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954819 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.954943 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955018 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955092 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955201 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955287 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955384 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955463 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955540 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955614 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.955687 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956016 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956112 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956206 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956285 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956359 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956431 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956516 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956591 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956663 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956766 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956842 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956922 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.956998 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957074 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957148 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957242 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957337 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957415 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957491 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957565 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957638 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957712 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957795 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957878 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.957951 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958026 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958099 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958195 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958276 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958356 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958431 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958504 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958590 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958667 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958739 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958819 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958893 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.958965 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959043 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959117 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959206 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959299 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959416 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959519 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.959598 4724 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.964322 4724 reconstruct.go:97] "Volume reconstruction finished" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.964434 4724 reconciler.go:26] "Reconciler: start to sync state" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.963025 4724 manager.go:324] Recovery completed Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.971978 4724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.972876 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.974119 4724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.974154 4724 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.974226 4724 kubelet.go:2335] "Starting kubelet main sync loop" Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.974271 4724 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 26 11:05:33 crc kubenswrapper[4724]: W0226 11:05:33.974721 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:33 crc kubenswrapper[4724]: E0226 11:05:33.974755 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.974773 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.974798 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.974811 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.976156 4724 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.976171 4724 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 26 11:05:33 crc 
kubenswrapper[4724]: I0226 11:05:33.976204 4724 state_mem.go:36] "Initialized new in-memory state store" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.996447 4724 policy_none.go:49] "None policy: Start" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.997540 4724 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 26 11:05:33 crc kubenswrapper[4724]: I0226 11:05:33.997636 4724 state_mem.go:35] "Initializing new in-memory state store" Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.033726 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.053491 4724 manager.go:334] "Starting Device Plugin manager" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.053592 4724 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.053602 4724 server.go:79] "Starting device plugin registration server" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.053987 4724 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.054003 4724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.054120 4724 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.054228 4724 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.054238 4724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.061408 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.074897 4724 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.075012 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.075932 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.075971 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.075985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.076145 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.076953 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.076981 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.076992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.077290 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.077344 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.077382 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.077410 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.077344 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078109 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078133 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078144 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078480 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078512 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078523 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078639 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078799 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.078830 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079070 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079081 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079395 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079406 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.079951 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.080062 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.080158 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.080204 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081106 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081157 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081173 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081399 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081405 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081445 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.081409 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.083156 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.083207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.083219 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.134294 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.155045 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.156048 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.156084 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.156099 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.156126 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.156612 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.166808 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.166851 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.166907 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 
crc kubenswrapper[4724]: I0226 11:05:34.166943 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.166971 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.166995 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167023 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167049 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167076 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167100 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167227 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167299 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167346 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167379 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.167409 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268430 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268454 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268473 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268532 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268549 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268569 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268600 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268602 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268638 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268696 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268703 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268666 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268682 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268673 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268748 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268769 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268789 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268620 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268858 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268905 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268917 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268950 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268982 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.268980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.269016 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.357675 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.358753 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.358812 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.358833 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.358870 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.359469 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.410516 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.419816 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.436115 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.443771 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.450664 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 11:05:34 crc kubenswrapper[4724]: W0226 11:05:34.475779 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3f6a7a7a1305186d6a895a758f60c980e86a77900f6a6148dacda9f6ce079510 WatchSource:0}: Error finding container 3f6a7a7a1305186d6a895a758f60c980e86a77900f6a6148dacda9f6ce079510: Status 404 returned error can't find the container with id 3f6a7a7a1305186d6a895a758f60c980e86a77900f6a6148dacda9f6ce079510 Feb 26 11:05:34 crc kubenswrapper[4724]: W0226 11:05:34.481967 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-392a95e632b808b00cd8b8acc858ee98ea3a925cd75f6ba6724c8162b1c414dd WatchSource:0}: Error finding container 392a95e632b808b00cd8b8acc858ee98ea3a925cd75f6ba6724c8162b1c414dd: Status 404 returned error can't find the container with id 392a95e632b808b00cd8b8acc858ee98ea3a925cd75f6ba6724c8162b1c414dd Feb 26 11:05:34 crc kubenswrapper[4724]: W0226 11:05:34.484965 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6bdb2acff0a3588d41f94e5b6996cefa3e451877e256ebe65ec708bb043738fe WatchSource:0}: Error finding container 6bdb2acff0a3588d41f94e5b6996cefa3e451877e256ebe65ec708bb043738fe: Status 404 returned error can't find the container with id 6bdb2acff0a3588d41f94e5b6996cefa3e451877e256ebe65ec708bb043738fe Feb 26 11:05:34 crc kubenswrapper[4724]: W0226 11:05:34.486418 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9351dd032eec7998c9d651634fa0f66f884c493ee43b2e5bddc65c16080acd42 WatchSource:0}: Error finding container 9351dd032eec7998c9d651634fa0f66f884c493ee43b2e5bddc65c16080acd42: Status 404 returned error can't find the container with id 9351dd032eec7998c9d651634fa0f66f884c493ee43b2e5bddc65c16080acd42 Feb 26 11:05:34 crc kubenswrapper[4724]: W0226 11:05:34.489760 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cf1d5409b61c82899f281b8d71a880bb02562a279c48e326d9678a5db7ed8d8b WatchSource:0}: Error finding container cf1d5409b61c82899f281b8d71a880bb02562a279c48e326d9678a5db7ed8d8b: Status 404 returned error can't find the container with id cf1d5409b61c82899f281b8d71a880bb02562a279c48e326d9678a5db7ed8d8b Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.535409 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.759950 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.761701 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.761752 4724 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.761766 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.761796 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.762307 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Feb 26 11:05:34 crc kubenswrapper[4724]: W0226 11:05:34.837959 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:34 crc kubenswrapper[4724]: E0226 11:05:34.838038 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.914032 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.979424 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf1d5409b61c82899f281b8d71a880bb02562a279c48e326d9678a5db7ed8d8b"} Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.980355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9351dd032eec7998c9d651634fa0f66f884c493ee43b2e5bddc65c16080acd42"} Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.981766 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6bdb2acff0a3588d41f94e5b6996cefa3e451877e256ebe65ec708bb043738fe"} Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.982703 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"392a95e632b808b00cd8b8acc858ee98ea3a925cd75f6ba6724c8162b1c414dd"} Feb 26 11:05:34 crc kubenswrapper[4724]: I0226 11:05:34.983628 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f6a7a7a1305186d6a895a758f60c980e86a77900f6a6148dacda9f6ce079510"} Feb 26 11:05:35 crc kubenswrapper[4724]: W0226 11:05:35.332452 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.102.83.145:6443: connect: connection refused Feb 26 11:05:35 crc kubenswrapper[4724]: E0226 11:05:35.332529 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:35 crc kubenswrapper[4724]: E0226 11:05:35.336568 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s" Feb 26 11:05:35 crc kubenswrapper[4724]: W0226 11:05:35.414488 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:35 crc kubenswrapper[4724]: E0226 11:05:35.414628 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:35 crc kubenswrapper[4724]: W0226 11:05:35.479968 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:35 crc kubenswrapper[4724]: E0226 11:05:35.480095 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.563156 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.564381 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.564423 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.564438 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.564463 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:35 crc kubenswrapper[4724]: E0226 11:05:35.564920 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.913906 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.973387 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:05:35 crc kubenswrapper[4724]: E0226 11:05:35.974921 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.989768 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d145aa52f5003862c62b869f473cfc5fe8ff7fb099013d711b206630407c5cd6"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.989821 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ebe8db8375d5a3f78d3345a3a3d9fd57496cbbf2338e3e6c9d7b9ecd638257f6"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.989833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d7dc32ab609486713001d26ff9b78c9f9004113c1ce295b049bb0645ac8cd78"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.989844 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.989934 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.990895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.990912 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.990920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.992746 4724 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8551fd11942a1b1f140ac02b1b228c918a6f82b4613d7cc538a503eeaf5a1fe1" exitCode=0 Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.992799 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8551fd11942a1b1f140ac02b1b228c918a6f82b4613d7cc538a503eeaf5a1fe1"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.992882 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:35 crc 
kubenswrapper[4724]: I0226 11:05:35.994054 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.994088 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.994098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.995414 4724 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564" exitCode=0 Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.995466 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.995547 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.996514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.996562 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.996580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.999383 4724 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89" exitCode=0 Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.999458 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89"} Feb 26 11:05:35 crc kubenswrapper[4724]: I0226 11:05:35.999477 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.000413 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.000430 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.000438 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.002486 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2" exitCode=0 Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.002523 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2"} Feb 26 
11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.002605 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.003546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.003571 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.003579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.005648 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.006948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.006978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.006990 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:36 crc kubenswrapper[4724]: E0226 11:05:36.176147 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:05:36 crc kubenswrapper[4724]: I0226 11:05:36.913618 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:36 crc kubenswrapper[4724]: E0226 11:05:36.937461 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.009508 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74f80998dc803441859f91cbe710a9e6e76f3d68e2a008d304b198da52b7522b"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.009604 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.010364 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:37 crc 
kubenswrapper[4724]: I0226 11:05:37.010384 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.010411 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.014804 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.014785 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.014928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.014950 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.015414 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.015440 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.015486 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.017639 4724 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a" exitCode=0 Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.017717 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.017844 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.018580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.018605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.018615 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.020860 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021273 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021301 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021311 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021320 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec"} Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021689 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021718 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.021728 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:37 crc kubenswrapper[4724]: W0226 11:05:37.030803 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:37 crc kubenswrapper[4724]: E0226 11:05:37.030888 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.165023 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.166012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.166043 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.166060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:37 crc kubenswrapper[4724]: I0226 11:05:37.166103 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:37 crc kubenswrapper[4724]: E0226 11:05:37.166450 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Feb 26 11:05:37 crc kubenswrapper[4724]: W0226 11:05:37.183731 4724 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Feb 26 11:05:37 crc kubenswrapper[4724]: E0226 11:05:37.183807 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.024809 4724 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e" exitCode=0 Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.024895 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e"} Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.024898 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.025828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.025860 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.025870 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.026330 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.029108 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5547f82b2c3e8b1e46588857ef9678731869b4d2d152f650f0404a2d200181a8" exitCode=255 Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.029198 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5547f82b2c3e8b1e46588857ef9678731869b4d2d152f650f0404a2d200181a8"} Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.029214 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.029248 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.029278 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.029309 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030239 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030250 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030268 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030266 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030305 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030340 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.030281 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:38 crc kubenswrapper[4724]: I0226 11:05:38.034518 4724 scope.go:117] "RemoveContainer" containerID="5547f82b2c3e8b1e46588857ef9678731869b4d2d152f650f0404a2d200181a8" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.003465 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.034246 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.036767 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257"} Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.036795 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.036840 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.042304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.042357 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.042376 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.045558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2f8354a3cde0c7aca4a6e9131d8654ed3498e27c634ea4bb903b708853f67ae3"} Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.045599 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3d5f2a8d9b46117fbada78998ba1203e9ea5af9fa89a86dda7c27c0a4b6aa552"} Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.045611 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"148ac36c722086f3da666bd9d11b1732195ceb43811dcf0a3491c8f85ab56024"} Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.045621 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3d203c18e04f74a64ee80185ab1b24934a813cb2755b922d672db40a0dda14ab"} Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.045632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7ea02fbc7026314379f833340a9b4bed7dd57ba1bbfdd95be51bdb34be147d1"} Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.045655 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.046539 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.046584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.046600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:39 crc kubenswrapper[4724]: I0226 11:05:39.882473 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.048135 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.048229 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.048150 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.049779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.049838 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.049859 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.049802 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.049985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.050038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.112054 4724 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.366971 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.368624 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.368666 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.368682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.368728 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.854578 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.854814 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.856336 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.856395 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.856414 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:40 crc kubenswrapper[4724]: I0226 11:05:40.866981 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.035310 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.050510 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.051271 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.051271 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.051775 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.051808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.051824 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.052941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.052976 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:41 crc 
kubenswrapper[4724]: I0226 11:05:41.052993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.053467 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.053495 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.053505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:41 crc kubenswrapper[4724]: I0226 11:05:41.289746 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.052674 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.052729 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.052778 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.053948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.053993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.054016 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.054039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.054072 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.054083 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.253418 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.441407 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.719486 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.719770 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.721270 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.721353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:42 crc kubenswrapper[4724]: I0226 11:05:42.721401 4724 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:43 crc kubenswrapper[4724]: I0226 11:05:43.055122 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:43 crc kubenswrapper[4724]: I0226 11:05:43.056024 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:43 crc kubenswrapper[4724]: I0226 11:05:43.056096 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:43 crc kubenswrapper[4724]: I0226 11:05:43.056112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.056887 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.057914 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.057967 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.057980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:44 crc kubenswrapper[4724]: E0226 11:05:44.061506 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.416389 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.416588 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.418001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.418057 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:44 crc kubenswrapper[4724]: I0226 11:05:44.418066 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:45 crc kubenswrapper[4724]: I0226 11:05:45.253786 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:05:45 crc kubenswrapper[4724]: I0226 11:05:45.253873 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:05:47 crc kubenswrapper[4724]: W0226 11:05:47.725168 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 11:05:47.726287 4724 trace.go:236] Trace[319232566]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Feb-2026 11:05:37.723) (total time: 10002ms): Feb 26 11:05:47 crc kubenswrapper[4724]: Trace[319232566]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:05:47.725) Feb 26 11:05:47 crc kubenswrapper[4724]: Trace[319232566]: [10.002641973s] [10.002641973s] END Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.726552 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 11:05:47.866223 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.875981 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:47 crc kubenswrapper[4724]: W0226 11:05:47.877782 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.877857 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.878474 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 11:05:47 crc kubenswrapper[4724]: W0226 11:05:47.880575 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.880613 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.882373 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.884285 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:05:47 crc kubenswrapper[4724]: W0226 11:05:47.886542 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z Feb 26 11:05:47 crc kubenswrapper[4724]: E0226 11:05:47.886587 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 11:05:47.887055 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 
11:05:47.887088 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 11:05:47.892899 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 11:05:47.892948 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 26 11:05:47 crc kubenswrapper[4724]: I0226 11:05:47.918007 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:47Z is after 2026-02-23T05:33:13Z Feb 26 11:05:48 crc kubenswrapper[4724]: I0226 11:05:48.916330 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:48Z is after 2026-02-23T05:33:13Z Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.072143 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.072757 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.074795 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" exitCode=255 Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.074832 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257"} Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.074883 4724 scope.go:117] "RemoveContainer" containerID="5547f82b2c3e8b1e46588857ef9678731869b4d2d152f650f0404a2d200181a8" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.075041 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.076105 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.076166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.076195 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.077150 4724 scope.go:117] "RemoveContainer" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" Feb 26 11:05:49 crc kubenswrapper[4724]: E0226 11:05:49.077563 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.916696 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:49Z is after 2026-02-23T05:33:13Z Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.922383 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.922555 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.923553 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.923652 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.923724 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:49 crc kubenswrapper[4724]: I0226 11:05:49.934919 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 26 11:05:50 crc kubenswrapper[4724]: I0226 11:05:50.078381 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:05:50 crc kubenswrapper[4724]: I0226 11:05:50.080300 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:50 crc kubenswrapper[4724]: I0226 11:05:50.080973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:50 crc kubenswrapper[4724]: I0226 11:05:50.081000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:50 crc kubenswrapper[4724]: I0226 11:05:50.081010 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:50 crc kubenswrapper[4724]: I0226 11:05:50.916930 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:50Z is 
after 2026-02-23T05:33:13Z Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.259462 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.259722 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.261487 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.261548 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.261568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.262457 4724 scope.go:117] "RemoveContainer" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" Feb 26 11:05:51 crc kubenswrapper[4724]: E0226 11:05:51.262752 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.294723 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:51 crc kubenswrapper[4724]: I0226 11:05:51.915877 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:51Z is after 2026-02-23T05:33:13Z Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.085278 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.086382 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.086422 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.086436 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.087083 4724 scope.go:117] "RemoveContainer" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" Feb 26 11:05:52 crc kubenswrapper[4724]: E0226 11:05:52.087285 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.089670 4724 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:05:52 crc kubenswrapper[4724]: W0226 11:05:52.435054 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:52Z is after 2026-02-23T05:33:13Z Feb 26 11:05:52 crc kubenswrapper[4724]: E0226 11:05:52.435162 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.445885 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.446051 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.447530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.447599 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.447614 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:52 crc kubenswrapper[4724]: I0226 11:05:52.916621 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:52Z is after 2026-02-23T05:33:13Z Feb 26 11:05:53 crc kubenswrapper[4724]: I0226 11:05:53.087921 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:53 crc kubenswrapper[4724]: I0226 11:05:53.089105 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:53 crc kubenswrapper[4724]: I0226 11:05:53.089139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:53 crc kubenswrapper[4724]: I0226 11:05:53.089151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:53 crc kubenswrapper[4724]: I0226 11:05:53.089877 4724 scope.go:117] "RemoveContainer" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" Feb 26 11:05:53 crc kubenswrapper[4724]: E0226 11:05:53.090054 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:05:53 crc kubenswrapper[4724]: I0226 11:05:53.924611 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:53Z is after 2026-02-23T05:33:13Z Feb 26 11:05:54 crc kubenswrapper[4724]: W0226 11:05:54.015638 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:54Z is after 2026-02-23T05:33:13Z Feb 26 11:05:54 crc kubenswrapper[4724]: E0226 11:05:54.016128 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:54 crc kubenswrapper[4724]: E0226 11:05:54.061694 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:05:54 crc kubenswrapper[4724]: I0226 11:05:54.279276 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:05:54 crc kubenswrapper[4724]: I0226 11:05:54.281119 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:05:54 crc kubenswrapper[4724]: I0226 11:05:54.281174 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:05:54 crc kubenswrapper[4724]: I0226 11:05:54.281208 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:05:54 crc kubenswrapper[4724]: I0226 11:05:54.281231 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:05:54 crc kubenswrapper[4724]: E0226 11:05:54.284652 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:54Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 11:05:54 crc kubenswrapper[4724]: E0226 11:05:54.287284 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:54Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 11:05:54 crc kubenswrapper[4724]: I0226 11:05:54.918288 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:54Z is 
after 2026-02-23T05:33:13Z Feb 26 11:05:55 crc kubenswrapper[4724]: I0226 11:05:55.253938 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:05:55 crc kubenswrapper[4724]: I0226 11:05:55.254012 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:05:55 crc kubenswrapper[4724]: I0226 11:05:55.917368 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:55Z is after 2026-02-23T05:33:13Z Feb 26 11:05:56 crc kubenswrapper[4724]: W0226 11:05:56.016669 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:56Z is after 2026-02-23T05:33:13Z Feb 26 11:05:56 crc kubenswrapper[4724]: E0226 11:05:56.016744 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:56 crc kubenswrapper[4724]: I0226 11:05:56.306977 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:05:56 crc kubenswrapper[4724]: E0226 11:05:56.310612 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:56 crc kubenswrapper[4724]: I0226 11:05:56.917843 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:56Z is after 2026-02-23T05:33:13Z Feb 26 11:05:57 crc kubenswrapper[4724]: W0226 11:05:57.708826 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:57Z is after 2026-02-23T05:33:13Z Feb 26 11:05:57 crc kubenswrapper[4724]: E0226 11:05:57.708905 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:05:57 crc kubenswrapper[4724]: E0226 11:05:57.888452 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:57Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:05:57 crc kubenswrapper[4724]: I0226 11:05:57.916443 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:57Z is after 2026-02-23T05:33:13Z Feb 26 11:05:58 crc kubenswrapper[4724]: I0226 11:05:58.916717 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:58Z is after 2026-02-23T05:33:13Z Feb 26 11:05:59 crc kubenswrapper[4724]: I0226 11:05:59.918297 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:05:59Z is after 2026-02-23T05:33:13Z Feb 26 11:06:00 crc kubenswrapper[4724]: W0226 11:06:00.054748 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:00Z is after 2026-02-23T05:33:13Z Feb 26 11:06:00 crc kubenswrapper[4724]: E0226 11:06:00.054852 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:00 crc kubenswrapper[4724]: I0226 11:06:00.917410 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:00Z is after 2026-02-23T05:33:13Z Feb 26 11:06:01 crc kubenswrapper[4724]: I0226 11:06:01.285100 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:01 crc kubenswrapper[4724]: I0226 11:06:01.287320 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:01 crc kubenswrapper[4724]: I0226 11:06:01.287401 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:01 crc kubenswrapper[4724]: I0226 11:06:01.287427 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:01 crc kubenswrapper[4724]: I0226 11:06:01.287471 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:01 crc kubenswrapper[4724]: E0226 11:06:01.291138 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:01Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 11:06:01 crc kubenswrapper[4724]: E0226 11:06:01.294073 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:01Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 11:06:01 crc kubenswrapper[4724]: I0226 11:06:01.917491 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:01Z is after 2026-02-23T05:33:13Z Feb 26 11:06:02 crc kubenswrapper[4724]: I0226 11:06:02.917814 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:02Z is after 2026-02-23T05:33:13Z Feb 26 11:06:03 crc kubenswrapper[4724]: I0226 11:06:03.916846 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:03Z is after 2026-02-23T05:33:13Z Feb 26 11:06:04 crc kubenswrapper[4724]: E0226 11:06:04.061927 4724 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:06:04 crc kubenswrapper[4724]: I0226 11:06:04.918972 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:04Z is after 2026-02-23T05:33:13Z Feb 26 11:06:04 crc kubenswrapper[4724]: I0226 11:06:04.975066 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:04 crc kubenswrapper[4724]: I0226 11:06:04.976396 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:04 crc kubenswrapper[4724]: I0226 11:06:04.976439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:04 crc kubenswrapper[4724]: I0226 11:06:04.976451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:04 crc kubenswrapper[4724]: I0226 11:06:04.977006 4724 scope.go:117] "RemoveContainer" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.254899 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.255008 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.255081 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.255307 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.256919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.256980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.256995 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.257738 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3d7dc32ab609486713001d26ff9b78c9f9004113c1ce295b049bb0645ac8cd78"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup 
probe, will be restarted" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.257942 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://3d7dc32ab609486713001d26ff9b78c9f9004113c1ce295b049bb0645ac8cd78" gracePeriod=30 Feb 26 11:06:05 crc kubenswrapper[4724]: W0226 11:06:05.483895 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:05Z is after 2026-02-23T05:33:13Z Feb 26 11:06:05 crc kubenswrapper[4724]: E0226 11:06:05.483985 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:05 crc kubenswrapper[4724]: I0226 11:06:05.916416 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:05Z is after 2026-02-23T05:33:13Z Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.123364 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.124095 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3d7dc32ab609486713001d26ff9b78c9f9004113c1ce295b049bb0645ac8cd78" exitCode=255 Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.124229 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3d7dc32ab609486713001d26ff9b78c9f9004113c1ce295b049bb0645ac8cd78"} Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.124314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279"} Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.124495 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.126223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.126287 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.126312 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.127133 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.129839 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842"} Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.130042 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.131292 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.131360 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.131384 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:06 crc kubenswrapper[4724]: I0226 11:06:06.916221 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:06Z is after 2026-02-23T05:33:13Z Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.133763 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.134316 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.135988 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" exitCode=255 Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.136024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842"} Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.136059 4724 scope.go:117] "RemoveContainer" containerID="ba0049f3874e2bca7066958588f265082c317070c02b1743c996cfb1cffec257" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.136150 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.136843 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.136881 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.136896 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.137493 4724 scope.go:117] "RemoveContainer" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" Feb 26 11:06:07 crc kubenswrapper[4724]: E0226 11:06:07.137691 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.457618 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:06:07 crc kubenswrapper[4724]: E0226 11:06:07.892503 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:07Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:07 crc kubenswrapper[4724]: I0226 11:06:07.915496 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:07Z is after 2026-02-23T05:33:13Z Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.142108 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.144923 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.146124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.146285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.146545 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.147326 4724 scope.go:117] "RemoveContainer" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" Feb 26 11:06:08 crc kubenswrapper[4724]: E0226 11:06:08.147599 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.294833 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:08 crc kubenswrapper[4724]: E0226 11:06:08.295571 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:08Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.296082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.296204 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.296276 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.296354 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:08 crc kubenswrapper[4724]: E0226 11:06:08.299460 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:08Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 11:06:08 crc kubenswrapper[4724]: I0226 11:06:08.916433 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:08Z is after 2026-02-23T05:33:13Z Feb 26 11:06:09 crc kubenswrapper[4724]: I0226 11:06:09.916693 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:09Z is after 2026-02-23T05:33:13Z Feb 26 11:06:10 crc kubenswrapper[4724]: I0226 11:06:10.918214 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:10Z is after 2026-02-23T05:33:13Z Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.035737 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.036019 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.037448 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
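[Annotation: every HTTPS call above to https://api-int.crc.testing:6443 fails during TLS verification of the peer certificate, which expired at 2026-02-23T05:33:13Z while the node clock reads 2026-02-26; this wedges the reflector list/watch loops, node registration, lease renewal, event posting, CSINode checks, and even the CSR that would rotate credentials, since the CSR is posted to the same endpoint. As a minimal diagnostic sketch, not part of the original log, the Go program below connects to the endpoint with verification deliberately disabled, retrieves the serving certificate, and compares its validity window against the current time; the endpoint string is taken from the log, and running this assumes the host resolves from wherever it is executed.]

package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate: we want to retrieve the peer
	// certificate even though verification would fail, so we can inspect
	// its validity window ourselves. Endpoint taken from the log above.
	conn, err := tls.Dial("tcp", "api-int.crc.testing:6443",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Fprintln(os.Stderr, "no peer certificates presented")
		os.Exit(1)
	}
	cert := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("now:       %s\n", now.Format(time.RFC3339))

	// Effectively the same time-bounds check that fails in the log:
	// "current time <now> is after <notAfter>" means the cert expired.
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("serving certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("serving certificate is not yet valid")
	default:
		fmt.Println("serving certificate is within its validity window")
	}
}

[Since the rotation path itself depends on reaching the API server over the same broken TLS channel, the "Rotating certificates" attempts cannot succeed from inside this loop; the retry entries above and below (lease renewal at interval=7s, node registration, reflector lists, CSR posts) all share this single cause and should clear on their own once a certificate within its validity window is served, or the node clock is back inside the window.]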
Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.037553 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.037618 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.258523 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.258764 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.260733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.260854 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.260932 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.261544 4724 scope.go:117] "RemoveContainer" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" Feb 26 11:06:11 crc kubenswrapper[4724]: E0226 11:06:11.261775 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:11 crc kubenswrapper[4724]: I0226 11:06:11.919237 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:11Z is after 2026-02-23T05:33:13Z Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.254033 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.254591 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.255802 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.255877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.255904 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.711170 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:06:12 crc kubenswrapper[4724]: E0226 11:06:12.714279 4724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:12 crc kubenswrapper[4724]: E0226 11:06:12.715428 4724 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 26 11:06:12 crc kubenswrapper[4724]: I0226 11:06:12.917278 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:12Z is after 2026-02-23T05:33:13Z Feb 26 11:06:13 crc kubenswrapper[4724]: I0226 11:06:13.916503 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:13Z is after 2026-02-23T05:33:13Z Feb 26 11:06:14 crc kubenswrapper[4724]: E0226 11:06:14.062344 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:06:14 crc kubenswrapper[4724]: I0226 11:06:14.916955 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:14Z is after 2026-02-23T05:33:13Z Feb 26 11:06:15 crc kubenswrapper[4724]: W0226 11:06:15.218535 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:15Z is after 2026-02-23T05:33:13Z Feb 26 11:06:15 crc kubenswrapper[4724]: E0226 11:06:15.218603 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.254734 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.255171 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:06:15 crc kubenswrapper[4724]: E0226 11:06:15.298422 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:15Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.299603 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.300803 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.300932 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.301013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.301111 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:15 crc kubenswrapper[4724]: E0226 11:06:15.305920 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:15Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 11:06:15 crc kubenswrapper[4724]: I0226 11:06:15.919798 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:15Z is after 2026-02-23T05:33:13Z Feb 26 11:06:16 crc kubenswrapper[4724]: I0226 11:06:16.919506 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:16Z is after 2026-02-23T05:33:13Z Feb 26 11:06:17 crc kubenswrapper[4724]: W0226 11:06:17.761297 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:17Z is after 2026-02-23T05:33:13Z Feb 26 11:06:17 crc kubenswrapper[4724]: E0226 11:06:17.761450 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:17 crc 
kubenswrapper[4724]: E0226 11:06:17.898518 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:17Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:17 crc kubenswrapper[4724]: I0226 11:06:17.917254 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:17Z is after 2026-02-23T05:33:13Z Feb 26 11:06:18 crc kubenswrapper[4724]: I0226 11:06:18.917630 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:18Z is after 2026-02-23T05:33:13Z Feb 26 11:06:19 crc kubenswrapper[4724]: I0226 11:06:19.917539 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:19Z is after 2026-02-23T05:33:13Z Feb 26 11:06:20 crc kubenswrapper[4724]: I0226 11:06:20.916601 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:20Z is after 2026-02-23T05:33:13Z Feb 26 11:06:21 crc kubenswrapper[4724]: W0226 11:06:21.399511 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:21Z is after 2026-02-23T05:33:13Z Feb 26 11:06:21 crc kubenswrapper[4724]: E0226 11:06:21.400171 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:21 crc kubenswrapper[4724]: I0226 11:06:21.916730 4724 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:21Z is after 2026-02-23T05:33:13Z Feb 26 11:06:21 crc kubenswrapper[4724]: I0226 11:06:21.974933 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:21 crc kubenswrapper[4724]: I0226 11:06:21.975934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:21 crc kubenswrapper[4724]: I0226 11:06:21.975959 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:21 crc kubenswrapper[4724]: I0226 11:06:21.975969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:21 crc kubenswrapper[4724]: I0226 11:06:21.979275 4724 scope.go:117] "RemoveContainer" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" Feb 26 11:06:21 crc kubenswrapper[4724]: E0226 11:06:21.979737 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:22 crc kubenswrapper[4724]: E0226 11:06:22.302380 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:22Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 11:06:22 crc kubenswrapper[4724]: I0226 11:06:22.306547 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:22 crc kubenswrapper[4724]: I0226 11:06:22.307698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:22 crc kubenswrapper[4724]: I0226 11:06:22.307741 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:22 crc kubenswrapper[4724]: I0226 11:06:22.307759 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:22 crc kubenswrapper[4724]: I0226 11:06:22.307789 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:22 crc kubenswrapper[4724]: E0226 11:06:22.310377 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:22Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 11:06:22 crc kubenswrapper[4724]: I0226 11:06:22.917334 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T11:06:22Z is after 2026-02-23T05:33:13Z Feb 26 11:06:23 crc kubenswrapper[4724]: I0226 11:06:23.919053 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:23Z is after 2026-02-23T05:33:13Z Feb 26 11:06:24 crc kubenswrapper[4724]: E0226 11:06:24.063446 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:06:24 crc kubenswrapper[4724]: W0226 11:06:24.172279 4724 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:24Z is after 2026-02-23T05:33:13Z Feb 26 11:06:24 crc kubenswrapper[4724]: E0226 11:06:24.172355 4724 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 11:06:24 crc kubenswrapper[4724]: I0226 11:06:24.420671 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 11:06:24 crc kubenswrapper[4724]: I0226 11:06:24.420897 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:24 crc kubenswrapper[4724]: I0226 11:06:24.422417 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:24 crc kubenswrapper[4724]: I0226 11:06:24.422610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:24 crc kubenswrapper[4724]: I0226 11:06:24.422698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:24 crc kubenswrapper[4724]: I0226 11:06:24.919583 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:24Z is after 2026-02-23T05:33:13Z Feb 26 11:06:25 crc kubenswrapper[4724]: I0226 11:06:25.254261 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:06:25 crc kubenswrapper[4724]: I0226 11:06:25.254392 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:06:25 crc kubenswrapper[4724]: I0226 11:06:25.920616 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:06:25Z is after 2026-02-23T05:33:13Z Feb 26 11:06:26 crc kubenswrapper[4724]: I0226 11:06:26.918725 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.906207 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c7238dda72af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,LastTimestamp:2026-02-26 11:05:33.911265967 +0000 UTC m=+0.567005082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.911365 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: I0226 11:06:27.915530 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.915587 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 
+0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.921934 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.927810 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c723967e2b98 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:34.0562134 +0000 UTC m=+0.711952515,LastTimestamp:2026-02-26 11:05:34.0562134 +0000 UTC m=+0.711952515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.933405 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.075963275 +0000 UTC m=+0.731702390,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.938488 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.075979213 +0000 UTC m=+0.731718338,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.942817 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a42de3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:34.075991742 +0000 UTC m=+0.731730857,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.947109 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.076969712 +0000 UTC m=+0.732708837,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.952200 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.07698869 +0000 UTC m=+0.732727805,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.957767 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a42de3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:34.076998408 +0000 UTC m=+0.732737523,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.963120 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.078127389 +0000 UTC m=+0.733866504,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.968449 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.078140657 +0000 UTC m=+0.733879772,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.974303 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a42de3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:34.078149676 +0000 UTC m=+0.733888791,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.981002 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.07849984 +0000 UTC m=+0.734238965,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.984710 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.078519447 +0000 UTC m=+0.734258572,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.989997 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a42de3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:34.078529846 +0000 UTC m=+0.734268981,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:27 crc kubenswrapper[4724]: E0226 11:06:27.994908 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.079057126 +0000 UTC m=+0.734796251,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:27.999950 4724 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.079077173 +0000 UTC m=+0.734816298,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.003916 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a42de3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:34.079087242 +0000 UTC m=+0.734826377,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.008162 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.079389672 +0000 UTC m=+0.735128807,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.012926 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.07940172 +0000 UTC m=+0.735140845,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc 
kubenswrapper[4724]: E0226 11:06:28.017495 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a42de3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a42de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974818275 +0000 UTC m=+0.630557390,LastTimestamp:2026-02-26 11:05:34.079412819 +0000 UTC m=+0.735151944,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.023036 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3c5a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3c5a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974791588 +0000 UTC m=+0.630530703,LastTimestamp:2026-02-26 11:05:34.07993116 +0000 UTC m=+0.735670275,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.028298 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c72391a3ffae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c72391a3ffae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:33.974806446 +0000 UTC m=+0.630545561,LastTimestamp:2026-02-26 11:05:34.079947088 +0000 UTC m=+0.735686193,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.032943 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c723b00297e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:34.484322281 +0000 UTC m=+1.140061416,LastTimestamp:2026-02-26 11:05:34.484322281 +0000 UTC m=+1.140061416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.037514 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c723b0131c89 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:34.485404809 +0000 UTC m=+1.141143924,LastTimestamp:2026-02-26 11:05:34.485404809 +0000 UTC m=+1.141143924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.041857 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c723b042759a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:34.488507802 +0000 UTC m=+1.144246917,LastTimestamp:2026-02-26 11:05:34.488507802 +0000 UTC m=+1.144246917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.047519 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c723b0517d86 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:34.48949287 +0000 UTC 
m=+1.145231985,LastTimestamp:2026-02-26 11:05:34.48949287 +0000 UTC m=+1.145231985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.051260 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723b0c1c0f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:34.496850162 +0000 UTC m=+1.152589277,LastTimestamp:2026-02-26 11:05:34.496850162 +0000 UTC m=+1.152589277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.054948 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723d7332576 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.14181567 +0000 UTC m=+1.797554805,LastTimestamp:2026-02-26 11:05:35.14181567 +0000 UTC m=+1.797554805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.060486 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c723d73f379a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.142606746 +0000 UTC m=+1.798345871,LastTimestamp:2026-02-26 11:05:35.142606746 +0000 UTC m=+1.798345871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 
11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.064459 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c723d74653e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.143072741 +0000 UTC m=+1.798811896,LastTimestamp:2026-02-26 11:05:35.143072741 +0000 UTC m=+1.798811896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.068105 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c723d7562010 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.144108048 +0000 UTC m=+1.799847173,LastTimestamp:2026-02-26 11:05:35.144108048 +0000 UTC m=+1.799847173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.071383 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c723d75e5763 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.144646499 +0000 UTC m=+1.800385624,LastTimestamp:2026-02-26 11:05:35.144646499 +0000 UTC m=+1.800385624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.075018 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723d7e162d8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.153234648 +0000 UTC m=+1.808973783,LastTimestamp:2026-02-26 11:05:35.153234648 +0000 UTC m=+1.808973783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.078506 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723d806c48a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.15568449 +0000 UTC m=+1.811423615,LastTimestamp:2026-02-26 11:05:35.15568449 +0000 UTC m=+1.811423615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.082361 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c723d8126c9d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.156448413 +0000 UTC m=+1.812187538,LastTimestamp:2026-02-26 11:05:35.156448413 +0000 UTC m=+1.812187538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.086040 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c723d826cb79 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.157783417 +0000 UTC m=+1.813522542,LastTimestamp:2026-02-26 11:05:35.157783417 +0000 UTC m=+1.813522542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.090565 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c723d8528e00 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.160651264 +0000 UTC m=+1.816390399,LastTimestamp:2026-02-26 11:05:35.160651264 +0000 UTC m=+1.816390399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.094176 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c723d8dfb414 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.169901588 +0000 UTC m=+1.825640723,LastTimestamp:2026-02-26 11:05:35.169901588 +0000 UTC m=+1.825640723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.097997 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723e97662df openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.448212191 +0000 UTC m=+2.103951316,LastTimestamp:2026-02-26 11:05:35.448212191 +0000 UTC m=+2.103951316,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.102927 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723ea56fd67 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.462931815 +0000 UTC m=+2.118670940,LastTimestamp:2026-02-26 11:05:35.462931815 +0000 UTC m=+2.118670940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.105364 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723ea6e492e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.464458542 +0000 UTC m=+2.120197657,LastTimestamp:2026-02-26 11:05:35.464458542 +0000 UTC m=+2.120197657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.107278 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723f45c69f1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.631059441 +0000 UTC m=+2.286798556,LastTimestamp:2026-02-26 11:05:35.631059441 +0000 UTC m=+2.286798556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc 
kubenswrapper[4724]: E0226 11:06:28.112014 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723f538e073 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.645507699 +0000 UTC m=+2.301246814,LastTimestamp:2026-02-26 11:05:35.645507699 +0000 UTC m=+2.301246814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.117112 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723f54a3bc2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.646645186 +0000 UTC m=+2.302384291,LastTimestamp:2026-02-26 11:05:35.646645186 +0000 UTC m=+2.302384291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.121367 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c7240100e77c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.843166076 +0000 UTC m=+2.498905191,LastTimestamp:2026-02-26 11:05:35.843166076 +0000 UTC m=+2.498905191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.125454 4724 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c724031e5b6a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.87865073 +0000 UTC m=+2.534389845,LastTimestamp:2026-02-26 11:05:35.87865073 +0000 UTC m=+2.534389845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.129385 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c7240a172790 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.995619216 +0000 UTC m=+2.651358331,LastTimestamp:2026-02-26 11:05:35.995619216 +0000 UTC m=+2.651358331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.133058 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c7240a4c5837 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.999105079 +0000 UTC m=+2.654844214,LastTimestamp:2026-02-26 11:05:35.999105079 +0000 UTC m=+2.654844214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.138089 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7240aad2ae8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.005450472 +0000 UTC m=+2.661189597,LastTimestamp:2026-02-26 11:05:36.005450472 +0000 UTC m=+2.661189597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.142877 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7240ab1f3aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.00576401 +0000 UTC m=+2.661503135,LastTimestamp:2026-02-26 11:05:36.00576401 +0000 UTC m=+2.661503135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.147971 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c72416022b38 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.195570488 +0000 UTC m=+2.851309603,LastTimestamp:2026-02-26 11:05:36.195570488 +0000 UTC m=+2.851309603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.152724 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c72416376c98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.199060632 +0000 UTC m=+2.854799747,LastTimestamp:2026-02-26 11:05:36.199060632 +0000 UTC m=+2.854799747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.156690 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c72416b156c2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.207050434 +0000 UTC m=+2.862789549,LastTimestamp:2026-02-26 11:05:36.207050434 +0000 UTC m=+2.862789549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.160851 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c72416d715c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.209524164 +0000 UTC m=+2.865263279,LastTimestamp:2026-02-26 11:05:36.209524164 +0000 UTC m=+2.865263279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.165208 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c72416da0aee openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.209717998 +0000 UTC m=+2.865457113,LastTimestamp:2026-02-26 11:05:36.209717998 +0000 UTC 
m=+2.865457113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.169563 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c72416e9efa0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.210759584 +0000 UTC m=+2.866498699,LastTimestamp:2026-02-26 11:05:36.210759584 +0000 UTC m=+2.866498699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.173264 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7241709012d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.212795693 +0000 UTC m=+2.868534808,LastTimestamp:2026-02-26 11:05:36.212795693 +0000 UTC m=+2.868534808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.176979 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c724183cd3b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.232969138 +0000 UTC m=+2.888708253,LastTimestamp:2026-02-26 11:05:36.232969138 +0000 UTC m=+2.888708253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.181589 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7241853e922 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.234481954 +0000 UTC m=+2.890221059,LastTimestamp:2026-02-26 11:05:36.234481954 +0000 UTC m=+2.890221059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.186893 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c7241880be5d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.237420125 +0000 UTC m=+2.893159240,LastTimestamp:2026-02-26 11:05:36.237420125 +0000 UTC m=+2.893159240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.191155 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c7242296d9de openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.406641118 +0000 UTC m=+3.062380223,LastTimestamp:2026-02-26 11:05:36.406641118 +0000 UTC m=+3.062380223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.195519 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c72423fc8445 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.430081093 +0000 UTC m=+3.085820208,LastTimestamp:2026-02-26 11:05:36.430081093 +0000 UTC m=+3.085820208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.200867 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7242408c9b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.430885302 +0000 UTC m=+3.086624427,LastTimestamp:2026-02-26 11:05:36.430885302 +0000 UTC m=+3.086624427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.207675 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c72424139d38 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.431594808 +0000 UTC m=+3.087333933,LastTimestamp:2026-02-26 11:05:36.431594808 +0000 UTC m=+3.087333933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.211825 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c724258480a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.45577028 +0000 UTC m=+3.111509395,LastTimestamp:2026-02-26 11:05:36.45577028 +0000 UTC m=+3.111509395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.216097 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c72425a2b308 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.457749256 +0000 UTC m=+3.113488371,LastTimestamp:2026-02-26 11:05:36.457749256 +0000 UTC m=+3.113488371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.219896 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c7242e621903 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.604510467 +0000 UTC m=+3.260249582,LastTimestamp:2026-02-26 11:05:36.604510467 +0000 UTC m=+3.260249582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.225216 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c7242f44968e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.619353742 +0000 UTC m=+3.275092857,LastTimestamp:2026-02-26 11:05:36.619353742 +0000 UTC 
m=+3.275092857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.229034 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c72430176b7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.633170813 +0000 UTC m=+3.288909928,LastTimestamp:2026-02-26 11:05:36.633170813 +0000 UTC m=+3.288909928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.232815 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7243117c3ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.649970604 +0000 UTC m=+3.305709719,LastTimestamp:2026-02-26 11:05:36.649970604 +0000 UTC m=+3.305709719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.236275 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c724312cfcce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.651361486 +0000 UTC m=+3.307100601,LastTimestamp:2026-02-26 11:05:36.651361486 +0000 UTC m=+3.307100601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.240759 4724 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7243d4169ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.854026667 +0000 UTC m=+3.509765792,LastTimestamp:2026-02-26 11:05:36.854026667 +0000 UTC m=+3.509765792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.245235 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7243e244b9a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.868895642 +0000 UTC m=+3.524634757,LastTimestamp:2026-02-26 11:05:36.868895642 +0000 UTC m=+3.524634757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.249588 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7243e3a9430 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.870356016 +0000 UTC m=+3.526095141,LastTimestamp:2026-02-26 11:05:36.870356016 +0000 UTC m=+3.526095141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.255567 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724472793ff openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.020105727 +0000 UTC m=+3.675844842,LastTimestamp:2026-02-26 11:05:37.020105727 +0000 UTC m=+3.675844842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.261051 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c724476b0b05 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.024527109 +0000 UTC m=+3.680266224,LastTimestamp:2026-02-26 11:05:37.024527109 +0000 UTC m=+3.680266224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.266200 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c72448a106bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.044842172 +0000 UTC m=+3.700581287,LastTimestamp:2026-02-26 11:05:37.044842172 +0000 UTC m=+3.700581287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.275704 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724526d3030 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.209217072 +0000 UTC m=+3.864956187,LastTimestamp:2026-02-26 11:05:37.209217072 +0000 UTC m=+3.864956187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.281248 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724531fe66e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.220929134 +0000 UTC m=+3.876668249,LastTimestamp:2026-02-26 11:05:37.220929134 +0000 UTC m=+3.876668249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.287226 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724832cf5fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.027091452 +0000 UTC m=+4.682830567,LastTimestamp:2026-02-26 11:05:38.027091452 +0000 UTC m=+4.682830567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.293322 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c7243e3a9430\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c7243e3a9430 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:36.870356016 +0000 UTC m=+3.526095141,LastTimestamp:2026-02-26 11:05:38.040138165 +0000 UTC m=+4.695877280,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.298308 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c724476b0b05\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c724476b0b05 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.024527109 +0000 UTC m=+3.680266224,LastTimestamp:2026-02-26 11:05:38.191241007 +0000 UTC m=+4.846980112,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.302706 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7248cf85daf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.191416751 +0000 UTC m=+4.847155866,LastTimestamp:2026-02-26 11:05:38.191416751 +0000 UTC m=+4.847155866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.308823 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c72448a106bc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c72448a106bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:37.044842172 +0000 UTC m=+3.700581287,LastTimestamp:2026-02-26 11:05:38.198872214 +0000 UTC m=+4.854611339,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.315699 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7248d758538 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.199618872 +0000 UTC m=+4.855357987,LastTimestamp:2026-02-26 11:05:38.199618872 +0000 UTC m=+4.855357987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.318194 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7248d86b908 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.200746248 +0000 UTC m=+4.856485363,LastTimestamp:2026-02-26 11:05:38.200746248 +0000 UTC m=+4.856485363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.324766 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724994f2724 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.398431012 +0000 UTC m=+5.054170127,LastTimestamp:2026-02-26 11:05:38.398431012 +0000 UTC m=+5.054170127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.331543 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c72499f21573 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.409108851 +0000 UTC m=+5.064847966,LastTimestamp:2026-02-26 11:05:38.409108851 +0000 UTC m=+5.064847966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.337387 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c7249a02529d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.410173085 +0000 UTC m=+5.065912200,LastTimestamp:2026-02-26 11:05:38.410173085 +0000 UTC m=+5.065912200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.342334 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724a35b9e1d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.567020061 +0000 UTC m=+5.222759176,LastTimestamp:2026-02-26 11:05:38.567020061 +0000 UTC m=+5.222759176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.346518 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724a3fa8403 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.577433603 +0000 UTC m=+5.233172728,LastTimestamp:2026-02-26 11:05:38.577433603 +0000 UTC m=+5.233172728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.350793 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724a40ae564 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.578507108 +0000 UTC m=+5.234246223,LastTimestamp:2026-02-26 11:05:38.578507108 +0000 UTC m=+5.234246223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.364406 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724adbdff8a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.74123969 +0000 UTC m=+5.396978805,LastTimestamp:2026-02-26 11:05:38.74123969 +0000 UTC m=+5.396978805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.369204 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724af17e16d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.763907437 +0000 UTC m=+5.419646552,LastTimestamp:2026-02-26 11:05:38.763907437 +0000 UTC m=+5.419646552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.374597 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724af2ec98c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.765408652 +0000 UTC m=+5.421147767,LastTimestamp:2026-02-26 11:05:38.765408652 +0000 
UTC m=+5.421147767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.379810 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724b972aac8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.937629384 +0000 UTC m=+5.593368499,LastTimestamp:2026-02-26 11:05:38.937629384 +0000 UTC m=+5.593368499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.384199 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c724ba693a87 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:38.953788039 +0000 UTC m=+5.609527154,LastTimestamp:2026-02-26 11:05:38.953788039 +0000 UTC m=+5.609527154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.391030 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c72631ec989a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 11:06:28 crc kubenswrapper[4724]: body: Feb 26 11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:45.253853338 +0000 UTC m=+11.909592453,LastTimestamp:2026-02-26 11:05:45.253853338 +0000 UTC m=+11.909592453,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.395728 4724 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c72631ed91bf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:45.253917119 +0000 UTC m=+11.909656234,LastTimestamp:2026-02-26 11:05:45.253917119 +0000 UTC m=+11.909656234,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.400247 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-apiserver-crc.1897c726cee06e9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 11:06:28 crc kubenswrapper[4724]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 11:06:28 crc kubenswrapper[4724]: Feb 26 11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:47.88707907 +0000 UTC m=+14.542818185,LastTimestamp:2026-02-26 11:05:47.88707907 +0000 UTC m=+14.542818185,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.405497 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c726cee0d4e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:47.88710525 +0000 UTC m=+14.542844365,LastTimestamp:2026-02-26 11:05:47.88710525 +0000 UTC m=+14.542844365,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.409579 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c726cee06e9e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-apiserver-crc.1897c726cee06e9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 11:06:28 crc kubenswrapper[4724]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 11:06:28 crc kubenswrapper[4724]: Feb 26 11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:47.88707907 +0000 UTC m=+14.542818185,LastTimestamp:2026-02-26 11:05:47.892935426 +0000 UTC m=+14.548674541,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.416579 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c726cee0d4e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c726cee0d4e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:47.88710525 +0000 UTC m=+14.542844365,LastTimestamp:2026-02-26 11:05:47.892965077 +0000 UTC m=+14.548704192,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.422045 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c72885faab4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 11:06:28 crc kubenswrapper[4724]: body: Feb 26 
11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.253996363 +0000 UTC m=+21.909735468,LastTimestamp:2026-02-26 11:05:55.253996363 +0000 UTC m=+21.909735468,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.425822 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c72885fb4a66 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.254037094 +0000 UTC m=+21.909776209,LastTimestamp:2026-02-26 11:05:55.254037094 +0000 UTC m=+21.909776209,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.430091 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c72885faab4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c72885faab4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 11:06:28 crc kubenswrapper[4724]: body: Feb 26 11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.253996363 +0000 UTC m=+21.909735468,LastTimestamp:2026-02-26 11:06:05.254982127 +0000 UTC m=+31.910721242,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.436835 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c72885fb4a66\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c72885fb4a66 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.254037094 +0000 UTC m=+21.909776209,LastTimestamp:2026-02-26 11:06:05.255037839 +0000 UTC m=+31.910776954,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.440521 4724 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c72ada426776 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:06:05.257918326 +0000 UTC m=+31.913657461,LastTimestamp:2026-02-26 11:06:05.257918326 +0000 UTC m=+31.913657461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.444340 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c723d806c48a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723d806c48a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.15568449 +0000 UTC m=+1.811423615,LastTimestamp:2026-02-26 11:06:05.412680963 +0000 UTC m=+32.068420078,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.447509 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c723e97662df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723e97662df 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.448212191 +0000 UTC m=+2.103951316,LastTimestamp:2026-02-26 11:06:05.567811298 +0000 UTC m=+32.223550413,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.452725 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c723ea56fd67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c723ea56fd67 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:35.462931815 +0000 UTC m=+2.118670940,LastTimestamp:2026-02-26 11:06:05.592545293 +0000 UTC m=+32.248284408,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.458884 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c72885faab4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c72885faab4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 11:06:28 crc kubenswrapper[4724]: body: Feb 26 11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.253996363 +0000 UTC m=+21.909735468,LastTimestamp:2026-02-26 11:06:15.255122252 +0000 UTC m=+41.910861447,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.465024 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c72885fb4a66\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c72885fb4a66 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.254037094 +0000 UTC m=+21.909776209,LastTimestamp:2026-02-26 11:06:15.25545857 +0000 UTC m=+41.911197785,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:06:28 crc kubenswrapper[4724]: E0226 11:06:28.469901 4724 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c72885faab4b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 11:06:28 crc kubenswrapper[4724]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c72885faab4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 11:06:28 crc kubenswrapper[4724]: body: Feb 26 11:06:28 crc kubenswrapper[4724]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:05:55.253996363 +0000 UTC m=+21.909735468,LastTimestamp:2026-02-26 11:06:25.254367508 +0000 UTC m=+51.910106623,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 11:06:28 crc kubenswrapper[4724]: > Feb 26 11:06:28 crc kubenswrapper[4724]: I0226 11:06:28.918932 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:29 crc kubenswrapper[4724]: E0226 11:06:29.309480 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:06:29 crc kubenswrapper[4724]: I0226 11:06:29.310472 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:29 crc kubenswrapper[4724]: I0226 11:06:29.311692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:29 crc kubenswrapper[4724]: I0226 11:06:29.311721 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:06:29 crc kubenswrapper[4724]: I0226 11:06:29.311750 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:29 crc kubenswrapper[4724]: I0226 11:06:29.311772 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:29 crc kubenswrapper[4724]: E0226 11:06:29.320261 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:06:29 crc kubenswrapper[4724]: I0226 11:06:29.919737 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:30 crc kubenswrapper[4724]: I0226 11:06:30.920129 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:31 crc kubenswrapper[4724]: I0226 11:06:31.918998 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:32 crc kubenswrapper[4724]: I0226 11:06:32.918951 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:32 crc kubenswrapper[4724]: I0226 11:06:32.975301 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:32 crc kubenswrapper[4724]: I0226 11:06:32.976768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:32 crc kubenswrapper[4724]: I0226 11:06:32.976837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:32 crc kubenswrapper[4724]: I0226 11:06:32.976856 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:32 crc kubenswrapper[4724]: I0226 11:06:32.977687 4724 scope.go:117] "RemoveContainer" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.205873 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.207426 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75"} Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.207566 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.208272 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.208295 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.208303 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:33 crc kubenswrapper[4724]: I0226 11:06:33.919344 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:34 crc kubenswrapper[4724]: E0226 11:06:34.064215 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.210930 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.211311 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.213100 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" exitCode=255 Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.213137 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75"} Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.213172 4724 scope.go:117] "RemoveContainer" containerID="6d2380388305c625fdb50dabcd45323b51b92aaf952535cf20508bd93e3b7842" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.213335 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.214077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.214103 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.214113 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.214602 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" Feb 26 11:06:34 crc kubenswrapper[4724]: E0226 11:06:34.214863 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:34 crc kubenswrapper[4724]: I0226 11:06:34.917882 4724 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.217131 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.254099 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.254213 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.254278 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.254429 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.255968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.256013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.256028 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.256658 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.256823 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279" gracePeriod=30 Feb 26 11:06:35 crc kubenswrapper[4724]: I0226 11:06:35.918265 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.224431 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.225853 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.226158 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279" exitCode=255 Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.226207 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279"} Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.226384 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e92d51ca0bcde355c08989b23b0a74610f818b9d28a946d9260561e934dfea5c"} Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.226479 4724 scope.go:117] "RemoveContainer" containerID="3d7dc32ab609486713001d26ff9b78c9f9004113c1ce295b049bb0645ac8cd78" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.226479 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.228238 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.228272 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.228282 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:36 crc kubenswrapper[4724]: E0226 11:06:36.314902 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.321102 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.322354 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.322488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.322552 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.322631 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:36 crc kubenswrapper[4724]: E0226 11:06:36.329135 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in 
API group \"\" at the cluster scope" node="crc" Feb 26 11:06:36 crc kubenswrapper[4724]: I0226 11:06:36.921256 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.230887 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.232780 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.233551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.233575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.233584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.457924 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.458131 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.459056 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.459077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.459086 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.459557 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" Feb 26 11:06:37 crc kubenswrapper[4724]: E0226 11:06:37.459713 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:37 crc kubenswrapper[4724]: I0226 11:06:37.925468 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:38 crc kubenswrapper[4724]: I0226 11:06:38.921045 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:39 crc kubenswrapper[4724]: I0226 11:06:39.918294 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:40 crc kubenswrapper[4724]: I0226 11:06:40.919482 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.035556 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.035734 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.036761 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.036802 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.036815 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.258381 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.258555 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.259487 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.259532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.259544 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.260102 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" Feb 26 11:06:41 crc kubenswrapper[4724]: E0226 11:06:41.260281 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:41 crc kubenswrapper[4724]: I0226 11:06:41.916908 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:42 crc kubenswrapper[4724]: I0226 11:06:42.256606 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:42 crc kubenswrapper[4724]: I0226 11:06:42.257294 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:42 crc 
kubenswrapper[4724]: I0226 11:06:42.258138 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:42 crc kubenswrapper[4724]: I0226 11:06:42.258205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:42 crc kubenswrapper[4724]: I0226 11:06:42.258238 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:42 crc kubenswrapper[4724]: I0226 11:06:42.259708 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:42 crc kubenswrapper[4724]: I0226 11:06:42.917443 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.245899 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.246724 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.246770 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.246782 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:43 crc kubenswrapper[4724]: E0226 11:06:43.321088 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.329424 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.330566 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.330629 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.330642 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.330666 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:43 crc kubenswrapper[4724]: E0226 11:06:43.335132 4724 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 11:06:43 crc kubenswrapper[4724]: I0226 11:06:43.918450 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:44 crc kubenswrapper[4724]: E0226 11:06:44.065358 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"crc\" not found" Feb 26 11:06:44 crc kubenswrapper[4724]: I0226 11:06:44.717483 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 11:06:44 crc kubenswrapper[4724]: I0226 11:06:44.732597 4724 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 11:06:44 crc kubenswrapper[4724]: I0226 11:06:44.920766 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:45 crc kubenswrapper[4724]: I0226 11:06:45.917308 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:46 crc kubenswrapper[4724]: I0226 11:06:46.918412 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:47 crc kubenswrapper[4724]: I0226 11:06:47.918910 4724 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 11:06:48 crc kubenswrapper[4724]: I0226 11:06:48.847515 4724 csr.go:261] certificate signing request csr-6wlg5 is approved, waiting to be issued Feb 26 11:06:48 crc kubenswrapper[4724]: I0226 11:06:48.853582 4724 csr.go:257] certificate signing request csr-6wlg5 is issued Feb 26 11:06:48 crc kubenswrapper[4724]: I0226 11:06:48.955018 4724 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 11:06:49 crc kubenswrapper[4724]: I0226 11:06:49.301881 4724 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 11:06:49 crc kubenswrapper[4724]: I0226 11:06:49.799524 4724 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 11:06:49 crc kubenswrapper[4724]: W0226 11:06:49.800031 4724 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 26 11:06:49 crc kubenswrapper[4724]: I0226 11:06:49.855871 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-22 10:08:17.119500371 +0000 UTC Feb 26 11:06:49 crc kubenswrapper[4724]: I0226 11:06:49.856114 4724 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6455h1m27.263389987s for next certificate rotation Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.335313 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.339244 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.339292 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.339327 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.339462 4724 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.351333 4724 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.351841 4724 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.351871 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.355756 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.355791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.355807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.355823 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.355836 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:06:50Z","lastTransitionTime":"2026-02-26T11:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.367828 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.377339 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.377584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.377651 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.377725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.377821 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:06:50Z","lastTransitionTime":"2026-02-26T11:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.389937 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.400578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.400868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.400967 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.401064 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.401166 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:06:50Z","lastTransitionTime":"2026-02-26T11:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.412564 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status payload identical to the 11:06:50.389937 attempt above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.420895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.420935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.420950 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.420966 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:06:50 crc kubenswrapper[4724]: I0226 11:06:50.420979 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:06:50Z","lastTransitionTime":"2026-02-26T11:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.431991 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status payload identical to the 11:06:50.389937 attempt above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.432153 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.432211 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.532860 4724
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.633579 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.734459 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.835643 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:50 crc kubenswrapper[4724]: E0226 11:06:50.936724 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.037950 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: I0226 11:06:51.039862 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:06:51 crc kubenswrapper[4724]: I0226 11:06:51.039999 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:51 crc kubenswrapper[4724]: I0226 11:06:51.041473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:51 crc kubenswrapper[4724]: I0226 11:06:51.041530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:51 crc kubenswrapper[4724]: I0226 11:06:51.041546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.138561 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.239522 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.340291 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.441579 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.542409 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.643231 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.744397 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.844860 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:51 crc kubenswrapper[4724]: E0226 11:06:51.945494 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.046300 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 
crc kubenswrapper[4724]: E0226 11:06:52.146953 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.247982 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.348666 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.449555 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.550252 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.651313 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.751794 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.852560 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.952895 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:52 crc kubenswrapper[4724]: I0226 11:06:52.974810 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:52 crc kubenswrapper[4724]: I0226 11:06:52.975890 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:52 crc kubenswrapper[4724]: I0226 11:06:52.975998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:52 crc kubenswrapper[4724]: I0226 11:06:52.976008 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:52 crc kubenswrapper[4724]: I0226 11:06:52.976585 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" Feb 26 11:06:52 crc kubenswrapper[4724]: E0226 11:06:52.976731 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.053895 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.155010 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.255725 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.356759 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc 
kubenswrapper[4724]: E0226 11:06:53.457853 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.558707 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.659624 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.760074 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.861212 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:53 crc kubenswrapper[4724]: E0226 11:06:53.961623 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.062513 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.066372 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.163633 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.264605 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.365174 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.465302 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.566037 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.666790 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.767531 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.868372 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:54 crc kubenswrapper[4724]: E0226 11:06:54.969412 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.070043 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.171249 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.271428 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.371763 4724 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.472699 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.573540 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.674172 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.774915 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.875877 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:55 crc kubenswrapper[4724]: E0226 11:06:55.976202 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.076828 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.177742 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.278080 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.378479 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.478642 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.579474 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: I0226 11:06:56.585116 4724 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.680041 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.780354 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.880873 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:56 crc kubenswrapper[4724]: I0226 11:06:56.975085 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:06:56 crc kubenswrapper[4724]: I0226 11:06:56.977025 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:06:56 crc kubenswrapper[4724]: I0226 11:06:56.977155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:06:56 crc kubenswrapper[4724]: I0226 11:06:56.977279 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:06:56 crc kubenswrapper[4724]: E0226 11:06:56.981168 4724 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.081286 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.181875 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.282843 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.383067 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.484421 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.585362 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.686262 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.786618 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.887248 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:57 crc kubenswrapper[4724]: E0226 11:06:57.987832 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.088272 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.188845 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.289562 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.389880 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.490594 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.590779 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.691753 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.792703 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.893004 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:58 crc kubenswrapper[4724]: E0226 11:06:58.993649 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.094668 4724 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.195779 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.296169 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.397346 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.498478 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.598903 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.699845 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.800925 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:06:59 crc kubenswrapper[4724]: E0226 11:06:59.901913 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.002232 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.102599 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.203381 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.303677 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.404660 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.505255 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.539688 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.543732 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.543760 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.543768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.543781 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.543789 4724 setters.go:603] "Node became not ready" node="crc" 
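[annotation: the kubelet_node_status.go:503 errors above come from the kubelet's local Node informer: no Node object named "crc" exists yet, so every sync loop that needs the node fails until registration succeeds. The same fact can be checked from outside the kubelet with a minimal poll sketch like the one below; the api.crc.testing:6443 endpoint and the K8S_TOKEN environment variable (e.g. K8S_TOKEN=$(oc whoami -t)) are assumptions for a CRC-style cluster, not values taken from this log.]

#!/usr/bin/env python3
# Hedged sketch: poll the apiserver for the Node object the kubelet cannot
# find. URL and token source are assumptions (CRC defaults), not log values.
import json
import os
import ssl
import time
import urllib.error
import urllib.request

API = "https://api.crc.testing:6443/api/v1/nodes/crc"  # assumed CRC endpoint
TOKEN = os.environ.get("K8S_TOKEN", "")

ctx = ssl.create_default_context()
ctx.check_hostname = False          # CRC's cert is self-signed; acceptable for
ctx.verify_mode = ssl.CERT_NONE     # a local probe, never for production use

while True:
    req = urllib.request.Request(API, headers={"Authorization": f"Bearer {TOKEN}"})
    try:
        with urllib.request.urlopen(req, context=ctx) as resp:
            node = json.load(resp)
            print("node registered:", node["metadata"]["name"])
            break
    except urllib.error.URLError as err:
        # An HTTP 404 here is the same condition as the lister's "not found".
        print("not registered yet:", getattr(err, "code", None) or err.reason)
        time.sleep(1)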
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:00Z","lastTransitionTime":"2026-02-26T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.553496 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.556546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.556573 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.556582 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.556595 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.556604 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:00Z","lastTransitionTime":"2026-02-26T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.566637 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.570098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.570139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.570150 4724 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.570166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.570207 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:00Z","lastTransitionTime":"2026-02-26T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.581412 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.586934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.587014 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.587030 4724 
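[annotation: the patch bodies above are ordinary JSON that klog has escaped into the err string, which makes them hard to read. A small sketch that undoes that quoting for one journal line on stdin follows; the regex and the \\\" -> " replacement assume exactly the escaping style seen in this log, and other klog configurations may quote differently. Usage, assuming the unit name from this boot: journalctl -u kubelet | grep -m1 'failed to patch status' | python3 unescape_patch.py]

#!/usr/bin/env python3
# Hedged sketch: recover the readable status-patch JSON from a kubenswrapper
# journal line like the ones above (escaping style assumed from this log).
import json
import re
import sys

def extract_patch(line: str):
    # The patch sits between 'failed to patch status \"' and '\" for node'.
    m = re.search(r'failed to patch status \\"(.*)\\" for node', line)
    if m is None:
        return None
    body = m.group(1).replace(r'\\\"', '"')  # undo the escaped quotes
    return json.loads(body)

patch = extract_patch(sys.stdin.read())
if patch is None:
    sys.exit("no status patch found on stdin")
# The conditions carry the story: every entry points at the missing CNI config.
print(json.dumps(patch["status"]["conditions"], indent=2))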
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.587047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:00 crc kubenswrapper[4724]: I0226 11:07:00.587058 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:00Z","lastTransitionTime":"2026-02-26T11:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.596935 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.597043 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.606268 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.707172 4724 
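[annotation: all four PATCH attempts died on the same transport error: nothing is listening on 127.0.0.1:9743, where the node.network-node-identity.openshift.io webhook should answer, so the kubelet exhausts its retry budget and logs "update node status exceeds retry count". A sketch that waits for the port to come up follows; host and port are verbatim from the error, the timeout and poll interval are arbitrary choices.]

#!/usr/bin/env python3
# Hedged sketch: wait for the webhook endpoint named in the error above to
# start accepting connections. 127.0.0.1:9743 is taken verbatim from the log.
import socket
import time

ADDR = ("127.0.0.1", 9743)

while True:
    try:
        with socket.create_connection(ADDR, timeout=2):
            print("webhook port is accepting connections")
            break
    except OSError as err:
        # ECONNREFUSED here mirrors the kubelet's "connect: connection refused".
        print("still down:", err)
        time.sleep(2)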
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.808262 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:00 crc kubenswrapper[4724]: E0226 11:07:00.909335 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.010119 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.110808 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.211999 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.312191 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.412351 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.512747 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.613523 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.713862 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.814621 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: E0226 11:07:01.914897 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:01 crc kubenswrapper[4724]: I0226 11:07:01.974470 4724 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 11:07:01 crc kubenswrapper[4724]: I0226 11:07:01.975872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:01 crc kubenswrapper[4724]: I0226 11:07:01.975899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:01 crc kubenswrapper[4724]: I0226 11:07:01.975909 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.015553 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.115627 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.216695 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.317646 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 
11:07:02.418137 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.518353 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.619252 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.720101 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.820544 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:02 crc kubenswrapper[4724]: E0226 11:07:02.921194 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.022338 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.122857 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.223499 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.324027 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.425023 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.525728 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.626581 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.726920 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.827378 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: E0226 11:07:03.928479 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:03 crc kubenswrapper[4724]: I0226 11:07:03.976746 4724 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.029159 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.067514 4724 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.130141 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.230802 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 
26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.331800 4724 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.427258 4724 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.434187 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.434218 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.434227 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.434244 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.434255 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:04Z","lastTransitionTime":"2026-02-26T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.537228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.537276 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.537285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.537305 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.537316 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:04Z","lastTransitionTime":"2026-02-26T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.639545 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.639582 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.639593 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.639609 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.639621 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:04Z","lastTransitionTime":"2026-02-26T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.742599 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.742638 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.742649 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.742664 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.742676 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:04Z","lastTransitionTime":"2026-02-26T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.844517 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.844568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.844578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.844595 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.844622 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:04Z","lastTransitionTime":"2026-02-26T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.947538 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.947588 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.947600 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.947620 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.947634 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:04Z","lastTransitionTime":"2026-02-26T11:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.952684 4724 apiserver.go:52] "Watching apiserver" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.963536 4724 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.963929 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.964569 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.964632 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.964693 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.965075 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.965110 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
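Everything from 11:07:04 onward pivots on one condition: NetworkReady=false because the kubelet found no CNI configuration file in /etc/kubernetes/cni/net.d/, so the node flips to NotReady and every pod sandbox creation is skipped. A minimal Go sketch of that check, not the kubelet's actual code path, with the accepted extensions following the standard libcni config loader:

```go
// cnicheck: a minimal sketch reproducing the kubelet's complaint above,
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
// Not the kubelet's real code path; extensions follow the libcni loader.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// This empty result is what keeps NetworkReady=false above.
		fmt.Println("no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("found CNI configs:", confs)
}
```

Until the network provider (OVN-Kubernetes, per the ovnkube-* volumes later in this log) writes a config into that directory, the Ready condition and the pod syncs keep failing exactly as logged.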
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.965197 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:04 crc kubenswrapper[4724]: E0226 11:07:04.965239 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.965276 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.965420 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.967449 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.967787 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.968152 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.968330 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.968534 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.968710 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.968532 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.968948 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.969167 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 11:07:04 crc kubenswrapper[4724]: I0226 11:07:04.999776 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.011013 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.020565 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.022629 4724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.029899 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.040631 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.050458 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.050575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.050624 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.050656 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.050676 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.050687 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.060256 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.072772 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086012 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086071 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086095 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086117 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086134 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086197 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 
11:07:05.086221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086243 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086265 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086287 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086310 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086329 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086352 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086376 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086395 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086414 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086432 4724 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086449 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086465 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086484 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086501 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086521 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086539 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086557 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
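Each status_manager patch attempt above dies on the same dial error: the pod.network-node-identity.openshift.io webhook is served from 127.0.0.1:9743, but its own pod (network-node-identity-vrzqb, whose webhook container is shown waiting in ContainerCreating) cannot start until the network is ready, so nothing is listening. A standalone sketch, assuming only that host and port from the errors above, to confirm the refused connection independently of the kubelet:

```go
// webhookprobe: a standalone sketch (not OpenShift tooling) that checks
// whether anything is listening where the kubelet's webhook calls fail.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the "failed calling webhook" errors
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Expected while the webhook pod is stuck ContainerCreating:
		// dial tcp 127.0.0.1:9743: connect: connection refused
		fmt.Println("webhook endpoint down:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint accepting TCP on", addr)
}
```

While this probe reports connection refused, the node and pod status patches keep failing as logged; they can only succeed once the webhook pod is running, which in turn waits on the CNI config noted earlier.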
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086556 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086701 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086736 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086762 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086785 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086809 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086840 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086864 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086887 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086911 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") 
pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086962 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086989 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087017 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087044 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087069 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087093 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087144 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087170 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087246 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087277 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087303 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087395 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087423 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087446 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087495 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087573 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087597 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087620 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087642 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087664 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087694 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087731 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087759 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087783 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087811 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087839 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087927 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087952 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088005 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088029 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088056 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088081 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088107 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088134 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088161 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088203 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088239 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088262 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088284 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088308 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088334 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" 
(UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088380 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088404 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088427 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088451 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088474 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088499 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088524 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088547 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088569 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088594 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088617 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088643 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088665 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088687 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088710 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088732 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088754 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088778 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088809 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088863 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088888 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088912 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088936 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088959 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088984 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089010 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089035 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089059 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089083 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089107 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089159 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089264 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089291 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089316 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089341 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089366 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089391 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 
11:07:05.089414 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089463 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089491 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089514 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089539 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089565 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089589 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089615 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089638 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089660 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089685 
4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089709 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089746 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089772 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089797 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089824 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089849 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089874 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089902 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089926 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 
11:07:05.089950 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089987 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090013 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086691 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086752 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086919 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.086972 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087320 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087390 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087618 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087698 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087751 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090193 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087808 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087879 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087955 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.087971 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088098 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088111 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088221 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088471 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088412 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088572 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088795 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.088886 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089104 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089332 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089442 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089762 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089859 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.089893 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090021 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090517 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090562 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090564 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090637 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090732 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090814 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091023 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091097 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091083 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091197 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091436 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091445 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.090037 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091570 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091594 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091619 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091640 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091660 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091661 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091677 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091736 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091751 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091771 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091790 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091808 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091829 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091850 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091871 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091892 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091915 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091957 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091980 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092004 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092024 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092087 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092116 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092143 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092167 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092211 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092235 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092255 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092272 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092292 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092311 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092330 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092349 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092368 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092386 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092403 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092420 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092436 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092455 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092473 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092491 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092508 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092526 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092544 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092561 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 
11:07:05.092581 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092599 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092619 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092643 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093151 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093281 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093339 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093394 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093442 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093546 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093657 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091804 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.091923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092197 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092270 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092294 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092543 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092558 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092578 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.092605 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093308 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093423 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093609 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093486 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093745 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093824 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.093978 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.094194 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.094484 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.094861 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.095013 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.095281 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.095504 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.095609 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.095865 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.096333 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.096343 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.096381 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.096803 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.097427 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.097940 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.098077 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.098117 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:05.598080801 +0000 UTC m=+92.253819926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.107161 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.107266 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.107781 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.107847 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.108139 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.108214 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.108533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.108576 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.108862 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.109196 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.109756 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.110901 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.110950 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111000 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.110937 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.099577 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.099580 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.099875 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.099959 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.100744 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.101276 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.101425 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.101645 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.101824 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.101951 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.102575 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.102777 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111311 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.102930 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.103546 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.103597 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.104030 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.104434 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.104754 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111392 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.105521 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.105538 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.105955 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.106052 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.106326 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111665 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111811 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111857 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111900 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111909 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.112342 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.112307 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.112529 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.112705 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.112741 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.112809 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113062 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113346 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113381 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113519 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113531 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113812 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114172 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114230 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114299 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114639 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114720 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114803 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114801 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114972 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115092 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117294 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115206 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115247 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115395 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.111656 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115580 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.098633 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115707 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115747 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115806 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.115945 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.116102 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113264 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113106 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114445 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.114617 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117293 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117315 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.115364 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117632 4724 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117682 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117816 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.117944 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118033 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118083 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118303 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118368 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.113982 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118458 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118471 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118523 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.119208 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.119230 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.119233 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118536 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118710 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118880 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.119045 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.119479 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.120857 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.121152 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.121326 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.121352 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.122959 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.123446 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.123545 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:05.623497355 +0000 UTC m=+92.279236470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.125235 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.125312 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.118519 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.109222 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.125890 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.127117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.127168 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.127224 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.127410 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.127750 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:05.627531607 +0000 UTC m=+92.283270722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.132054 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.132935 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.133870 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.133906 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.133933 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.134606 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:05.634580752 +0000 UTC m=+92.290319857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134728 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134748 4724 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134767 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134779 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134790 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134802 4724 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134820 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134832 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134844 4724 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134856 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134870 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134892 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134903 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134915 4724 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134929 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134939 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134951 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134966 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134977 4724 reconciler_common.go:293] "Volume detached for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134753 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135009 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135021 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135036 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135049 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135060 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135070 4724 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135082 4724 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135092 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135128 4724 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135140 4724 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135154 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135195 4724 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135208 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135222 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135233 4724 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135244 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135253 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135265 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135274 4724 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135286 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135296 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135309 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135319 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135348 4724 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135362 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135391 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135416 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135425 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135454 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135464 4724 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135521 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135534 4724 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135547 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135556 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135565 4724 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135574 4724 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135586 4724 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc 
kubenswrapper[4724]: I0226 11:07:05.135596 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135607 4724 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135638 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135648 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135657 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135668 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135681 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135692 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135701 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135712 4724 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135724 4724 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135734 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135743 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135756 4724 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135765 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135776 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135786 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135800 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135809 4724 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135820 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135830 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135843 4724 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135853 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135864 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135874 4724 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135888 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc 
kubenswrapper[4724]: I0226 11:07:05.135898 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135908 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135921 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135933 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135944 4724 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135954 4724 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135984 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.135994 4724 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136005 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136015 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136030 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136041 4724 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136051 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136063 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136074 4724 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136084 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136093 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136105 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136116 4724 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136129 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136138 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136150 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136159 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136173 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136198 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136212 4724 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136222 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136233 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136250 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136271 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136283 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136295 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136308 4724 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136318 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136327 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136338 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136351 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136361 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136371 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136383 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136392 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136404 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136414 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136426 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136436 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136446 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136457 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136470 4724 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136479 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136489 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136498 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136510 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136537 4724 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136548 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136562 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136572 4724 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136581 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136593 4724 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136605 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136615 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136624 4724 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136634 4724 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136647 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136656 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136665 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" 
(UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136675 4724 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136687 4724 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136697 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136706 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136718 4724 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136728 4724 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136764 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136785 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136799 4724 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136808 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136819 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136829 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136841 4724 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136851 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136862 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136875 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136887 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136899 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136910 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136925 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136935 4724 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136944 4724 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136983 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.136998 4724 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137007 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137016 4724 
reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137025 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137036 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137046 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137055 4724 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137068 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137077 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137086 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137096 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137107 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137133 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137142 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137151 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137163 4724 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137190 4724 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.137217 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.134956 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.143733 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.146697 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.146736 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.146755 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.146837 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:05.646812531 +0000 UTC m=+92.302551646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.152502 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.153016 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.154366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.156736 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.160617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.160659 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.160671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.160692 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.160703 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.161627 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.175646 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.177827 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237804 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237861 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237879 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237885 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237933 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237945 4724 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237957 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237967 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237977 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.237995 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.263812 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.264176 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.264372 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.264504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.264651 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.281068 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.290352 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.298043 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 11:07:05 crc kubenswrapper[4724]: W0226 11:07:05.305681 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3340c19f25b868e99b73dd0a3ebded8638c36c6c590e68b285e755263cb73fea WatchSource:0}: Error finding container 3340c19f25b868e99b73dd0a3ebded8638c36c6c590e68b285e755263cb73fea: Status 404 returned error can't find the container with id 3340c19f25b868e99b73dd0a3ebded8638c36c6c590e68b285e755263cb73fea Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.306892 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7edc6943e0f682c621f5d4a77054e1700781a244b99da2a5cf26824631b62acb"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.371549 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.371579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.371589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.371606 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.371619 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.473516 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.473553 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.473565 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.473580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.473591 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.575769 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.575807 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.575819 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.575836 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.575849 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.639498 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.639545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.639567 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.639597 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.639712 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.639732 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.639743 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for 
pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.639793 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:06.639775171 +0000 UTC m=+93.295514286 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.640136 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:06.640127581 +0000 UTC m=+93.295866696 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.640204 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.640229 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:06.640223263 +0000 UTC m=+93.295962379 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.640257 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.640348 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:06.640335207 +0000 UTC m=+93.296074322 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.678117 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.678157 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.678167 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.678199 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.678211 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.740483 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.740649 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.740709 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.740724 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.740795 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:06.74076611 +0000 UTC m=+93.396505225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.780505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.780535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.780544 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.780557 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.780567 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.883485 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.883528 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.883539 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.883555 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.883566 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.974802 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:05 crc kubenswrapper[4724]: E0226 11:07:05.974954 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.978459 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.979414 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.980678 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.981490 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.982668 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.983304 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.983999 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.985641 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.986441 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.987520 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.988094 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.989763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.989800 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.989810 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.989825 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.989836 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:05Z","lastTransitionTime":"2026-02-26T11:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.990259 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.990951 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.991567 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.992682 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.993460 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.994603 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.995090 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.995755 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.996956 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.997696 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.999009 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 11:07:05 crc kubenswrapper[4724]: I0226 11:07:05.999604 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.000823 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.001437 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.002309 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.003606 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.004171 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.006411 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.007226 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.008482 4724 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.008647 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.011207 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.012317 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.012855 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.014705 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.015611 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.016707 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.017496 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.018510 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.018976 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.019879 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.020498 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.021513 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.021944 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.022854 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.023334 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.024407 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.024851 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.025715 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.026151 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.027618 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.028153 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.028718 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.093037 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.093079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.093091 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.093105 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.093115 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.195445 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.195483 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.195494 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.195527 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.195554 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.297682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.297726 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.297778 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.297811 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.297830 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.310341 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.311811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f10683330f3def8f2b5ef7bb687b35fdbb02a5568014dfecbc5c1654c5ef1bc3"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.313381 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.313418 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.313428 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3340c19f25b868e99b73dd0a3ebded8638c36c6c590e68b285e755263cb73fea"}
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.336689 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.355011 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.355011 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.368639 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.384034 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.400501 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.400547 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.400564 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.400617 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.400635 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.415644 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.429143 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.429143 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.442780 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.456739 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.468150 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.480426 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:06Z is after 2025-08-24T17:21:41Z"
for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.503155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.503171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.503205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.503219 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.605289 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.605327 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.605336 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.605351 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.605362 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.647003 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.647074 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.647108 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.647153 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647271 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647328 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:08.64731176 +0000 UTC m=+95.303050875 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647712 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:08.647699621 +0000 UTC m=+95.303438736 (durationBeforeRetry 2s). 
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647712 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:08.647699621 +0000 UTC m=+95.303438736 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647797 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647813 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647825 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647857 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:08.647846035 +0000 UTC m=+95.303585150 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647907 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.647934 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:08.647926367 +0000 UTC m=+95.303665482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.707311 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.707351 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.707363 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.707379 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.707389 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.747740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.747941 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.747984 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.747997 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.748071 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:08.748049422 +0000 UTC m=+95.403788537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.809396 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.809458 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.809482 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.809530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.809555 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.912013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.912063 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.912076 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.912093 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.912106 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:06Z","lastTransitionTime":"2026-02-26T11:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.975027 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:06 crc kubenswrapper[4724]: I0226 11:07:06.975072 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.975160 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 11:07:06 crc kubenswrapper[4724]: E0226 11:07:06.975372 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.014479 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.014518 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.014530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.014548 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.014562 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.117378 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.117436 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.117452 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.117494 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.117510 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.219944 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.219982 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.220000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.220016 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.220029 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.322242 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.322277 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.322286 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.322302 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.322311 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.424170 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.424275 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.424291 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.424313 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.424330 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.526506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.526548 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.526558 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.526574 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.526585 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.628975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.629010 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.629019 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.629031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.629040 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.731310 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.731596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.731686 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.731771 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.731880 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.833911 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.833959 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.833970 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.833986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.833997 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.938567 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.938611 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.938622 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.938637 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.938651 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:07Z","lastTransitionTime":"2026-02-26T11:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.974561 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:07 crc kubenswrapper[4724]: E0226 11:07:07.974675 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.988947 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 26 11:07:07 crc kubenswrapper[4724]: I0226 11:07:07.988954 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75"
Feb 26 11:07:07 crc kubenswrapper[4724]: E0226 11:07:07.989194 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.041339 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.041377 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.041388 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.041402 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.041414 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.143162 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.143213 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.143223 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.143240 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.143252 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.245382 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.245424 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.245439 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.245455 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.245468 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.321091 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75"
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.321476 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.322035 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.336563 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.348032 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.348068 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.348080 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.348095 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.348106 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.349113 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.362651 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.379805 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.391031 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.403791 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.415997 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:08Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.450779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.450817 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.450826 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.450840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.450853 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.553071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.553112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.553125 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.553141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.553153 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.655857 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.655908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.655926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.655944 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.655957 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.663549 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.663655 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.663690 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.663717 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663749 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:12.663724735 +0000 UTC m=+99.319463860 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663794 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663827 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663831 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663877 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663925 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663853 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:12.663842588 +0000 UTC m=+99.319581713 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663966 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:12.663952471 +0000 UTC m=+99.319691586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.663985 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:12.663975032 +0000 UTC m=+99.319714217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.758474 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.758510 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.758519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.758533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.758542 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.764721 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.764856 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.764876 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.764890 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.764940 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:12.764924879 +0000 UTC m=+99.420663994 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.860853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.860906 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.860920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.860938 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.860951 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.962808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.962852 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.962863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.962877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.962885 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:08Z","lastTransitionTime":"2026-02-26T11:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.975428 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:08 crc kubenswrapper[4724]: I0226 11:07:08.975467 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.975575 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 11:07:08 crc kubenswrapper[4724]: E0226 11:07:08.975680 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.065488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.065533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.065546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.065563 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.065577 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.167870 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.167916 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.167926 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.167942 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.167955 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.269933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.269964 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.269972 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.269986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.269995 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.371873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.371928 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.371941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.371958 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.371974 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.474533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.474585 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.474597 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.474613 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.474627 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.577136 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.577164 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.577172 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.577197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.577206 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.679795 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.679828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.679841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.679857 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.679871 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.782681 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.782732 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.782745 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.782764 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.782776 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.884812 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.884851 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.884861 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.884885 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.884897 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.975270 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:09 crc kubenswrapper[4724]: E0226 11:07:09.975409 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.986575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.986778 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.986887 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.986973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:09 crc kubenswrapper[4724]: I0226 11:07:09.987046 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:09Z","lastTransitionTime":"2026-02-26T11:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.088671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.088713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.088725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.088741 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.088752 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.190972 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.191047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.191058 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.191072 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.191081 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.292832 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.292863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.292873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.292885 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.292894 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.394853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.394893 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.394908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.394927 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.394939 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.497072 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.497105 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.497115 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.497130 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.497141 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.599874 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.599915 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.599930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.599950 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.599966 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.702698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.702744 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.702761 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.702779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.702794 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.804363 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.804402 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.804413 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.804426 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.804435 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.906100 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.906140 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.906151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.906167 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.906195 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.958736 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.958779 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.958792 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.958808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.958820 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.975006 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:10 crc kubenswrapper[4724]: E0226 11:07:10.975093 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:10 crc kubenswrapper[4724]: E0226 11:07:10.975048 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:10Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.975281 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:10 crc kubenswrapper[4724]: E0226 11:07:10.975417 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.978882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.978908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.978919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.978934 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.978947 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:10 crc kubenswrapper[4724]: E0226 11:07:10.991390 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:10Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.994797 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.994827 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.994839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.994857 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:10 crc kubenswrapper[4724]: I0226 11:07:10.994873 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:10Z","lastTransitionTime":"2026-02-26T11:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:11 crc kubenswrapper[4724]: E0226 11:07:11.008613 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.012421 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.012479 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.012496 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.012518 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.012535 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: E0226 11:07:11.026328 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.030132 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.030174 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.030220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.030240 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.030253 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: E0226 11:07:11.044367 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:11 crc kubenswrapper[4724]: E0226 11:07:11.044483 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.045791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.045814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.045825 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.045840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.045851 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.149076 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.149150 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.149175 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.149239 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.149267 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.252290 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.252340 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.252356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.252412 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.252432 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.355727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.355795 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.355814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.355841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.355860 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.458490 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.458524 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.458535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.458546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.458555 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.561356 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.561393 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.561402 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.561415 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.561424 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.664386 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.664441 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.664460 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.664484 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.664501 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.766862 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.766918 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.766931 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.766949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.766961 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.869505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.869550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.869561 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.869575 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.869585 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.971629 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.971662 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.971685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.971698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.971709 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:11Z","lastTransitionTime":"2026-02-26T11:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:11 crc kubenswrapper[4724]: I0226 11:07:11.974867 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:11 crc kubenswrapper[4724]: E0226 11:07:11.974949 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.073920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.073957 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.073973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.073992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.074004 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.176564 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.176611 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.176625 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.176641 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.176654 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.279430 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.279477 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.279488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.279506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.279519 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.381873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.381955 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.381968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.381992 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.382006 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.488613 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.488667 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.488679 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.488697 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.488713 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.590472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.590514 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.590524 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.590539 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.590549 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.692802 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.692845 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.692856 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.692872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.692887 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:12Z","lastTransitionTime":"2026-02-26T11:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.697267 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697408 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:20.697390037 +0000 UTC m=+107.353129162 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.697405 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.697455 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.697485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697535 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697555 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697571 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:20.697562382 +0000 UTC m=+107.353301497 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697584 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697600 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697603 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697640 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:20.697628644 +0000 UTC m=+107.353367759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.697664 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:20.697647175 +0000 UTC m=+107.353386330 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[node-status block repeats at 11:07:12.795 with identical content apart from timestamps]
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.798568 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.798802 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.798846 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.798865 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.798937 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:20.798906901 +0000 UTC m=+107.454646036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[node-status block repeats at 11:07:12.898 with identical content apart from timestamps]
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.974489 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.974661 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 11:07:12 crc kubenswrapper[4724]: I0226 11:07:12.975138 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:12 crc kubenswrapper[4724]: E0226 11:07:12.975254 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at ~100 ms intervals from 11:07:13.001 through 11:07:13.920 (10 occurrences) with identical content apart from timestamps]
Feb 26 11:07:13 crc kubenswrapper[4724]: I0226 11:07:13.975355 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:07:13 crc kubenswrapper[4724]: E0226 11:07:13.975577 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:13 crc kubenswrapper[4724]: I0226 11:07:13.990963 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.005748 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.019233 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:14Z is after 2025-08-24T17:21:41Z"
[node-status block repeats at 11:07:14.023 with identical content apart from timestamps]
Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.032535 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.049081 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.064377 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.085217 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.125610 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.125676 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.125690 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.125733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.125748 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:14Z","lastTransitionTime":"2026-02-26T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[node-status block repeats at ~100 ms intervals from 11:07:14.228 through 11:07:14.945 (8 occurrences) with identical content apart from timestamps]
Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.974959 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:14 crc kubenswrapper[4724]: I0226 11:07:14.975008 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:14 crc kubenswrapper[4724]: E0226 11:07:14.975079 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 11:07:14 crc kubenswrapper[4724]: E0226 11:07:14.975335 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at ~100 ms intervals from 11:07:15.049 through 11:07:15.562 (6 occurrences) with identical content apart from timestamps]
Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.666649 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.666727 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.666754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.666791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.666818 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:15Z","lastTransitionTime":"2026-02-26T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.769814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.769869 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.769886 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.769911 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.769929 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:15Z","lastTransitionTime":"2026-02-26T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.871876 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.871920 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.871933 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.871948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.871960 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:15Z","lastTransitionTime":"2026-02-26T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.974597 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.974724 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.974761 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.974774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.974794 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:15 crc kubenswrapper[4724]: I0226 11:07:15.974809 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:15Z","lastTransitionTime":"2026-02-26T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:15 crc kubenswrapper[4724]: E0226 11:07:15.974912 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.077568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.077603 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.077612 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.077624 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.077633 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.180475 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.180517 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.180532 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.180549 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.180562 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.192786 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zfscs"] Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.193130 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.195033 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.195651 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.196875 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.210241 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.223078 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.226263 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9b41f80b-f054-43aa-9b24-64d58d45f72f-hosts-file\") pod \"node-resolver-zfscs\" (UID: \"9b41f80b-f054-43aa-9b24-64d58d45f72f\") " pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.226304 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cln7m\" (UniqueName: \"kubernetes.io/projected/9b41f80b-f054-43aa-9b24-64d58d45f72f-kube-api-access-cln7m\") pod \"node-resolver-zfscs\" (UID: \"9b41f80b-f054-43aa-9b24-64d58d45f72f\") " pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.236894 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.246124 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.260833 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.273960 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.282998 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.283026 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.283034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.283047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.283056 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.289303 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.300377 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.327362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/9b41f80b-f054-43aa-9b24-64d58d45f72f-hosts-file\") pod \"node-resolver-zfscs\" (UID: \"9b41f80b-f054-43aa-9b24-64d58d45f72f\") " pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.327405 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cln7m\" (UniqueName: \"kubernetes.io/projected/9b41f80b-f054-43aa-9b24-64d58d45f72f-kube-api-access-cln7m\") pod \"node-resolver-zfscs\" (UID: \"9b41f80b-f054-43aa-9b24-64d58d45f72f\") " pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.327720 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9b41f80b-f054-43aa-9b24-64d58d45f72f-hosts-file\") pod \"node-resolver-zfscs\" (UID: \"9b41f80b-f054-43aa-9b24-64d58d45f72f\") " pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.348751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cln7m\" (UniqueName: \"kubernetes.io/projected/9b41f80b-f054-43aa-9b24-64d58d45f72f-kube-api-access-cln7m\") pod \"node-resolver-zfscs\" (UID: \"9b41f80b-f054-43aa-9b24-64d58d45f72f\") " pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.386155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.386217 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.386230 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.386245 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.386261 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.488629 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.488680 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.488697 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.488714 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.488725 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.506883 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zfscs" Feb 26 11:07:16 crc kubenswrapper[4724]: W0226 11:07:16.549559 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b41f80b_f054_43aa_9b24_64d58d45f72f.slice/crio-69684e81053e48438f503a623e63c3ff8ff8b9b298c16612b0917970c0f43053 WatchSource:0}: Error finding container 69684e81053e48438f503a623e63c3ff8ff8b9b298c16612b0917970c0f43053: Status 404 returned error can't find the container with id 69684e81053e48438f503a623e63c3ff8ff8b9b298c16612b0917970c0f43053 Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.563767 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5gv7d"] Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.564440 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.565691 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wtm5h"] Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.567024 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ns2kr"] Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.567488 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.567673 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.568216 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.568791 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.569990 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.570448 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.570614 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.571932 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.572223 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.572449 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.572679 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.573395 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.573857 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.577238 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.588123 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.597018 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.597306 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.597451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.597579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.597697 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.610066 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.622662 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630489 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-system-cni-dir\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 
11:07:16.630528 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/332754e6-e64b-4e47-988d-6f1ddbe4912e-cni-binary-copy\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630550 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-daemon-config\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-cni-bin\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630592 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2405c92-e87c-4e60-ac28-0cd51800d9df-proxy-tls\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630612 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-kubelet\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630641 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630658 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-cni-multus\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630703 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq495\" (UniqueName: \"kubernetes.io/projected/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-kube-api-access-hq495\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630720 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-system-cni-dir\") pod \"multus-ns2kr\" (UID: 
\"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630736 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-multus-certs\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-os-release\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630766 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-k8s-cni-cncf-io\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630781 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-socket-dir-parent\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630794 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-cni-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630808 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pck7f\" (UniqueName: \"kubernetes.io/projected/b2405c92-e87c-4e60-ac28-0cd51800d9df-kube-api-access-pck7f\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630821 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-etc-kubernetes\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cnibin\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630850 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630869 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-os-release\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630883 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2405c92-e87c-4e60-ac28-0cd51800d9df-mcd-auth-proxy-config\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630899 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cni-binary-copy\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630912 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b2405c92-e87c-4e60-ac28-0cd51800d9df-rootfs\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630924 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-cnibin\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630937 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-hostroot\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630949 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-conf-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630968 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4f6\" (UniqueName: \"kubernetes.io/projected/332754e6-e64b-4e47-988d-6f1ddbe4912e-kube-api-access-rn4f6\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.630980 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-netns\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.634335 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.670527 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.691466 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.710120 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.710163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.710175 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.710206 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.710218 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.716355 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731341 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731376 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-cni-multus\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731395 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq495\" (UniqueName: \"kubernetes.io/projected/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-kube-api-access-hq495\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") 
" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731411 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-system-cni-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731427 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-os-release\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731458 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-multus-certs\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731479 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-k8s-cni-cncf-io\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731491 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-cni-multus\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731563 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-system-cni-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731596 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-multus-certs\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731579 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-os-release\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731579 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-cni-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731508 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-cni-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731643 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-k8s-cni-cncf-io\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731657 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-socket-dir-parent\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731680 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pck7f\" (UniqueName: \"kubernetes.io/projected/b2405c92-e87c-4e60-ac28-0cd51800d9df-kube-api-access-pck7f\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731703 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-etc-kubernetes\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731725 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731753 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cnibin\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-os-release\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731790 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2405c92-e87c-4e60-ac28-0cd51800d9df-mcd-auth-proxy-config\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731820 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/b2405c92-e87c-4e60-ac28-0cd51800d9df-rootfs\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731835 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-cnibin\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731848 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-hostroot\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731861 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-conf-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cni-binary-copy\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731901 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-netns\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4f6\" (UniqueName: \"kubernetes.io/projected/332754e6-e64b-4e47-988d-6f1ddbe4912e-kube-api-access-rn4f6\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-os-release\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731954 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-system-cni-dir\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731957 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-hostroot\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " 
pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731932 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-system-cni-dir\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/332754e6-e64b-4e47-988d-6f1ddbe4912e-cni-binary-copy\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731984 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-etc-kubernetes\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-daemon-config\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732011 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-cni-bin\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732027 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2405c92-e87c-4e60-ac28-0cd51800d9df-proxy-tls\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732036 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cnibin\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732042 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-kubelet\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731928 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-socket-dir-parent\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732575 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2405c92-e87c-4e60-ac28-0cd51800d9df-mcd-auth-proxy-config\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732613 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b2405c92-e87c-4e60-ac28-0cd51800d9df-rootfs\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732628 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732643 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-cnibin\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.731997 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732668 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-cni-bin\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732690 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-var-lib-kubelet\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732732 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/332754e6-e64b-4e47-988d-6f1ddbe4912e-cni-binary-copy\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732652 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-daemon-config\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732771 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-host-run-netns\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.732790 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/332754e6-e64b-4e47-988d-6f1ddbe4912e-multus-conf-dir\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.734333 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-cni-binary-copy\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.735704 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.742456 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2405c92-e87c-4e60-ac28-0cd51800d9df-proxy-tls\") pod \"machine-config-daemon-5gv7d\" (UID: 
\"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.749867 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq495\" (UniqueName: \"kubernetes.io/projected/23294c7d-d7c0-4b51-92a0-f7df8c67ff0e-kube-api-access-hq495\") pod \"multus-additional-cni-plugins-wtm5h\" (UID: \"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\") " pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.750868 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pck7f\" (UniqueName: \"kubernetes.io/projected/b2405c92-e87c-4e60-ac28-0cd51800d9df-kube-api-access-pck7f\") pod \"machine-config-daemon-5gv7d\" (UID: \"b2405c92-e87c-4e60-ac28-0cd51800d9df\") " pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.752403 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4f6\" (UniqueName: \"kubernetes.io/projected/332754e6-e64b-4e47-988d-6f1ddbe4912e-kube-api-access-rn4f6\") pod \"multus-ns2kr\" (UID: \"332754e6-e64b-4e47-988d-6f1ddbe4912e\") " pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.753467 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.770313 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.784054 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.796499 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.806909 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.812700 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.812733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.812744 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.812758 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.812769 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.819981 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.830243 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.840418 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.850760 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.859832 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.869979 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.883733 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.905003 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.913817 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ns2kr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.915034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.915061 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.915071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.915087 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.915097 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:16Z","lastTransitionTime":"2026-02-26T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:16 crc kubenswrapper[4724]: W0226 11:07:16.924710 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod332754e6_e64b_4e47_988d_6f1ddbe4912e.slice/crio-8808ac3b47d22a7a94169fb56a1eabde7d0c576168ea21093fb8d93e58ff2d73 WatchSource:0}: Error finding container 8808ac3b47d22a7a94169fb56a1eabde7d0c576168ea21093fb8d93e58ff2d73: Status 404 returned error can't find the container with id 8808ac3b47d22a7a94169fb56a1eabde7d0c576168ea21093fb8d93e58ff2d73 Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.931987 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" Feb 26 11:07:16 crc kubenswrapper[4724]: W0226 11:07:16.969635 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23294c7d_d7c0_4b51_92a0_f7df8c67ff0e.slice/crio-d713f1af40d78383e6f620118d4784b64d6e9bb12dbaad5f788c431d6fdfd043 WatchSource:0}: Error finding container d713f1af40d78383e6f620118d4784b64d6e9bb12dbaad5f788c431d6fdfd043: Status 404 returned error can't find the container with id d713f1af40d78383e6f620118d4784b64d6e9bb12dbaad5f788c431d6fdfd043 Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.974354 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:16 crc kubenswrapper[4724]: E0226 11:07:16.974450 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.974492 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:16 crc kubenswrapper[4724]: E0226 11:07:16.974533 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.978341 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z56jr"] Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.979276 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.981520 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.981829 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.982127 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.982134 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.982318 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.982379 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.982447 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 11:07:16 crc kubenswrapper[4724]: I0226 11:07:16.996467 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.013058 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.017273 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.017328 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.017340 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.017376 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.017389 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.028523 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033496 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-slash\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033581 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-node-log\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033605 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-bin\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033656 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-ovn\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1140bb-3473-456a-b916-cfef4d4b7222-ovn-node-metrics-cert\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-netns\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033735 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-config\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-etc-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033765 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-log-socket\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033827 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-script-lib\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033861 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-kubelet\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033877 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-ovn-kubernetes\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033915 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033948 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-var-lib-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033967 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-netd\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.033983 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-env-overrides\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.034031 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-systemd-units\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.034198 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvffk\" (UniqueName: \"kubernetes.io/projected/4c1140bb-3473-456a-b916-cfef4d4b7222-kube-api-access-wvffk\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.034271 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-systemd\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.034305 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.043490 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.057844 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.076495 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.102055 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.118848 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.121138 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.121165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.121189 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.121208 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.121223 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.133231 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-etc-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135404 
4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-script-lib\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135439 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-etc-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135445 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-log-socket\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135497 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-kubelet\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135472 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-log-socket\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135516 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-ovn-kubernetes\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135541 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-ovn-kubernetes\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135557 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-var-lib-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135561 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-kubelet\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135583 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-var-lib-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135617 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-netd\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135645 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-netd\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135649 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-env-overrides\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135683 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-systemd-units\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135686 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-openvswitch\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvffk\" (UniqueName: \"kubernetes.io/projected/4c1140bb-3473-456a-b916-cfef4d4b7222-kube-api-access-wvffk\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-systemd\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135748 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-systemd-units\") pod 
\"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135766 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135778 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-systemd\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135821 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.135927 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-slash\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.136679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-env-overrides\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.136773 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-script-lib\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141462 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-slash\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141548 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-node-log\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-bin\") pod \"ovnkube-node-z56jr\" (UID: 
\"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141613 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-node-log\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141633 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-ovn\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1140bb-3473-456a-b916-cfef4d4b7222-ovn-node-metrics-cert\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141675 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-ovn\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141683 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-bin\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-netns\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141730 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-config\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.141751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-netns\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.142195 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-config\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.145991 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1140bb-3473-456a-b916-cfef4d4b7222-ovn-node-metrics-cert\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.149713 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvffk\" (UniqueName: \"kubernetes.io/projected/4c1140bb-3473-456a-b916-cfef4d4b7222-kube-api-access-wvffk\") pod \"ovnkube-node-z56jr\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.150495 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.159856 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.169141 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.223722 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.223803 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.223820 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.223843 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.223858 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.327107 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.327754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.327842 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.327919 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.327995 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.344167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerStarted","Data":"eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.344405 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerStarted","Data":"d713f1af40d78383e6f620118d4784b64d6e9bb12dbaad5f788c431d6fdfd043"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.350495 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.350558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"ae0e37cdb931ac7a9a9e147fd1650fa3a6748e998d242eb297d75880dd2099ff"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.352666 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zfscs" event={"ID":"9b41f80b-f054-43aa-9b24-64d58d45f72f","Type":"ContainerStarted","Data":"e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.352710 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zfscs" event={"ID":"9b41f80b-f054-43aa-9b24-64d58d45f72f","Type":"ContainerStarted","Data":"69684e81053e48438f503a623e63c3ff8ff8b9b298c16612b0917970c0f43053"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.354550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ns2kr" event={"ID":"332754e6-e64b-4e47-988d-6f1ddbe4912e","Type":"ContainerStarted","Data":"f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.354707 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ns2kr" event={"ID":"332754e6-e64b-4e47-988d-6f1ddbe4912e","Type":"ContainerStarted","Data":"8808ac3b47d22a7a94169fb56a1eabde7d0c576168ea21093fb8d93e58ff2d73"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.364357 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.370716 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.379085 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: W0226 11:07:17.389540 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1140bb_3473_456a_b916_cfef4d4b7222.slice/crio-e5cd0dc09af5164561011dce55ac433aa2030a83598390a79ac165522fb761e7 WatchSource:0}: Error finding container e5cd0dc09af5164561011dce55ac433aa2030a83598390a79ac165522fb761e7: Status 404 returned error can't find the container with id e5cd0dc09af5164561011dce55ac433aa2030a83598390a79ac165522fb761e7 Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.392093 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.402339 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.413935 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.426055 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.430102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.430131 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.430142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: 
I0226 11:07:17.430155 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.430165 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.447413 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.463166 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.476762 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.490354 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.503430 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.516059 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.532716 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.532754 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.532763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.532776 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.532786 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.535951 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.553703 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.568741 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.583709 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.595875 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.612520 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.626595 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.634945 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.634986 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.634996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.635013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.635022 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.644107 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.657310 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.671478 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.684096 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.710596 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.737874 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.737907 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 
11:07:17.737917 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.737935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.737946 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.841476 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.841519 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.841530 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.841546 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.841556 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.946345 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.946390 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.946401 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.946426 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.946440 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:17Z","lastTransitionTime":"2026-02-26T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:17 crc kubenswrapper[4724]: I0226 11:07:17.974790 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:17 crc kubenswrapper[4724]: E0226 11:07:17.974980 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.048844 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.049247 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.049263 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.049280 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.049290 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.152290 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.152346 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.152359 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.152379 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.152393 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.254945 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.254996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.255013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.255034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.255047 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.361468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.361528 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.361547 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.361570 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.361588 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.365570 4724 generic.go:334] "Generic (PLEG): container finished" podID="23294c7d-d7c0-4b51-92a0-f7df8c67ff0e" containerID="eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e" exitCode=0 Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.365660 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerDied","Data":"eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.369486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.373521 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" exitCode=0 Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.373801 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.373879 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"e5cd0dc09af5164561011dce55ac433aa2030a83598390a79ac165522fb761e7"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.391229 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.418020 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.430935 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.445223 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.458797 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.471868 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.471905 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.471915 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.471929 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.471939 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.474745 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.495535 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.516047 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.531429 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.546914 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.560676 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.574385 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.574435 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.574449 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.574467 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.574480 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.584338 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.600990 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.614134 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.629344 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.643304 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.653332 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.666428 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.676686 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.676722 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.676731 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.676746 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.676764 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.682249 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.696829 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.707959 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.717903 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.728928 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.745610 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.779782 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.780304 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.780407 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.780509 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.780587 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.883142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.884036 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.884154 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.884280 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.884377 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.975018 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:18 crc kubenswrapper[4724]: E0226 11:07:18.975397 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.975129 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:18 crc kubenswrapper[4724]: E0226 11:07:18.975612 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.986403 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.986443 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.986456 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.986473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:18 crc kubenswrapper[4724]: I0226 11:07:18.986487 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:18Z","lastTransitionTime":"2026-02-26T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.088694 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.088755 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.088767 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.088783 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.088798 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.190397 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.190436 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.190448 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.190466 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.190479 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.292741 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.292778 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.292788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.292803 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.292814 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.378241 4724 generic.go:334] "Generic (PLEG): container finished" podID="23294c7d-d7c0-4b51-92a0-f7df8c67ff0e" containerID="292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35" exitCode=0 Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.378336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerDied","Data":"292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.387090 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.387244 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.387345 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.394263 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.394379 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.394520 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.394716 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.394854 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.395536 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.408605 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.425956 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.438397 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.452702 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.468705 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.481492 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.495981 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.499046 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.499071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.499079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.499092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.499102 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.506299 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.522683 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.537889 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.550954 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.602141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.602191 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.602201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.602220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.602230 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.704438 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.704463 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.704471 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.704491 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.704501 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.806169 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.806488 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.806500 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.806516 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.806527 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.908489 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.908529 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.908543 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.908558 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.908571 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:19Z","lastTransitionTime":"2026-02-26T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.974791 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:19 crc kubenswrapper[4724]: E0226 11:07:19.974975 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:19 crc kubenswrapper[4724]: I0226 11:07:19.975894 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.010859 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.010902 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.010911 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.010928 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.010941 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.114815 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.114850 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.114862 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.114877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.114889 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.216849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.216875 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.216883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.216895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.216904 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.319127 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.319165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.319190 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.319207 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.319218 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.391621 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.393251 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.393546 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.395006 4724 generic.go:334] "Generic (PLEG): container finished" podID="23294c7d-d7c0-4b51-92a0-f7df8c67ff0e" containerID="35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a" exitCode=0 Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.395064 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerDied","Data":"35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.400407 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.400450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.400465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.421296 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.421328 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.421339 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.421353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.421365 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.425736 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.442037 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"star
tedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.453999 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.467169 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.478973 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.491750 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.503867 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.518876 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.528361 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.528400 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.528415 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.528430 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.528442 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.533247 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.546828 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.557795 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.573496 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.585543 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b20
21b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.597386 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\
\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.609330 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.623283 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.633032 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.633057 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.633067 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.633079 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.633088 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.644363 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.654585 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.671924 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.684763 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.701794 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.715321 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.727489 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.734882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.734921 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.734932 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.734949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.734961 4724 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.739398 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:20Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.788929 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.789022 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 
11:07:20.789090 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789108 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789188 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:07:36.789158891 +0000 UTC m=+123.444898006 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.789218 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789249 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789264 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789274 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789308 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:36.789300365 +0000 UTC m=+123.445039480 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789327 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789356 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:36.789348396 +0000 UTC m=+123.445087521 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.789376 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:36.789370847 +0000 UTC m=+123.445109962 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.837165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.837387 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.837468 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.837580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.837671 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.890098 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.890257 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.890282 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.890294 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.890349 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:36.890323624 +0000 UTC m=+123.546062739 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.940117 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.940171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.940201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.940220 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.940231 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:20Z","lastTransitionTime":"2026-02-26T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.974563 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.974706 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:20 crc kubenswrapper[4724]: I0226 11:07:20.974863 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:20 crc kubenswrapper[4724]: E0226 11:07:20.975017 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.042072 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.042112 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.042124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.042141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.042152 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.143910 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.144121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.144214 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.144289 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.144374 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.246598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.246633 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.246643 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.246660 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.246672 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.349313 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.349364 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.349375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.349389 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.349398 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.393168 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.393330 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.393350 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.393368 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.393394 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.407464 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.410258 4724 generic.go:334] "Generic (PLEG): container finished" podID="23294c7d-d7c0-4b51-92a0-f7df8c67ff0e" containerID="4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6" exitCode=0 Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.410302 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerDied","Data":"4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.411058 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.411094 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.411106 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.411121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.411132 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.437507 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.437798 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.442068 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.442098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.442108 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.442123 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.442134 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.458298 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.461636 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.461669 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.461680 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.461695 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.461709 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.463141 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.475976 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.480295 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.480330 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.480341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.480359 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.480374 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.480612 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.493958 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.494156 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.495722 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.495901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.495944 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.495956 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.495973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.495986 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.522523 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.551360 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.591644 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z 
is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.598034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.598063 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.598074 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.598088 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.598100 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.607976 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.621544 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.634824 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.646279 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.657761 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.699806 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.699863 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.699872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.699887 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.699899 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.802236 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.802289 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.802301 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.802319 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.802332 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.904671 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.904722 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.904737 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.904758 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.904774 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:21Z","lastTransitionTime":"2026-02-26T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:21 crc kubenswrapper[4724]: I0226 11:07:21.975567 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:21 crc kubenswrapper[4724]: E0226 11:07:21.975705 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.006472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.006506 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.006517 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.006533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.006544 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.109076 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.109170 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.109210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.109236 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.109248 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.212102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.212141 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.212150 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.212165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.212186 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.314636 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.314677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.314685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.314698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.314710 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416351 4724 generic.go:334] "Generic (PLEG): container finished" podID="23294c7d-d7c0-4b51-92a0-f7df8c67ff0e" containerID="93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa" exitCode=0 Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerDied","Data":"93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416464 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416473 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416706 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.416716 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.422514 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.439611 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.453005 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.465202 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.477652 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.489962 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.502598 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.513718 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.519535 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.519559 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.519568 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.519580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.519589 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.526896 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.537401 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.549299 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.565336 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.582847 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z 
is after 2025-08-24T17:21:41Z"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.621782 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.621829 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.621841 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.621860 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.621874 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.725339 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.725616 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.725626 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.725640 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.725652 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.828307 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.828348 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.828361 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.828379 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.828405 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.930161 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.930263 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.930308 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.930340 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.930363 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:22Z","lastTransitionTime":"2026-02-26T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.956398 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-49n4g"]
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.956952 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-49n4g"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.961131 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.961247 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.961438 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.961511 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.975089 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.975090 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:22 crc kubenswrapper[4724]: E0226 11:07:22.975407 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 11:07:22 crc kubenswrapper[4724]: E0226 11:07:22.975548 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.977355 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:22 crc kubenswrapper[4724]: I0226 11:07:22.993114 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:22Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.009765 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9df16c-aeb4-4568-acbc-01b30c871371-host\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.010298 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xfb\" (UniqueName: \"kubernetes.io/projected/9f9df16c-aeb4-4568-acbc-01b30c871371-kube-api-access-g6xfb\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.010441 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f9df16c-aeb4-4568-acbc-01b30c871371-serviceca\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.011051 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\
\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.025880 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.033404 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.033447 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.033458 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.033476 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.033490 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.040209 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.052995 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.070077 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.083078 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.096718 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.108751 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.111385 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f9df16c-aeb4-4568-acbc-01b30c871371-serviceca\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.111570 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9df16c-aeb4-4568-acbc-01b30c871371-host\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.111653 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f9df16c-aeb4-4568-acbc-01b30c871371-host\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " 
pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.111961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xfb\" (UniqueName: \"kubernetes.io/projected/9f9df16c-aeb4-4568-acbc-01b30c871371-kube-api-access-g6xfb\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.112476 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f9df16c-aeb4-4568-acbc-01b30c871371-serviceca\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.125526 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.130418 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xfb\" (UniqueName: \"kubernetes.io/projected/9f9df16c-aeb4-4568-acbc-01b30c871371-kube-api-access-g6xfb\") pod \"node-ca-49n4g\" (UID: \"9f9df16c-aeb4-4568-acbc-01b30c871371\") " pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 
11:07:23.136651 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.136685 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.136694 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.136707 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.136718 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.137800 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.150906 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.238901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 
crc kubenswrapper[4724]: I0226 11:07:23.238950 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.238963 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.238983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.238996 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.272434 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-49n4g" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.348020 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.348053 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.348062 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.348076 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.348086 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.428085 4724 generic.go:334] "Generic (PLEG): container finished" podID="23294c7d-d7c0-4b51-92a0-f7df8c67ff0e" containerID="17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b" exitCode=0 Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.428135 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerDied","Data":"17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.430710 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-49n4g" event={"ID":"9f9df16c-aeb4-4568-acbc-01b30c871371","Type":"ContainerStarted","Data":"01f3d75b277b08d1681598f2abd1670bc3fd7586dc47045b61a95c0a8e0f27bf"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.442509 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.450273 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.450310 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.450319 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.450333 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.450344 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.456449 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.469228 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.484404 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.497164 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.510658 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.522379 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.536996 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.550056 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.552459 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.552495 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.552508 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.552524 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.552535 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.563283 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.573285 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.587860 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.605069 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.654931 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.654968 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.654980 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.654996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.655007 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.757813 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.758117 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.758228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.758329 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.758396 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.861171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.861529 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.861619 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.861716 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.861803 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.963799 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.963840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.963849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.963862 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.963871 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:23Z","lastTransitionTime":"2026-02-26T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.975554 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:23 crc kubenswrapper[4724]: E0226 11:07:23.975694 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:23 crc kubenswrapper[4724]: I0226 11:07:23.990229 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.003452 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.017209 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.027878 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.039372 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.050572 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.060675 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.066642 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.066687 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.066699 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.066714 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.066725 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.072404 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.090619 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.101731 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.114016 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.132305 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.159698 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z 
is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.168847 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.168877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.168886 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.168900 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.168909 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.271000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.271038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.271071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.271090 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.271105 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.373654 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.373688 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.373699 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.373713 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.373725 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.439786 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" event={"ID":"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e","Type":"ContainerStarted","Data":"d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.445727 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.446150 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.447040 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-49n4g" event={"ID":"9f9df16c-aeb4-4568-acbc-01b30c871371","Type":"ContainerStarted","Data":"c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.454078 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.466931 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.467881 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.476956 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.477000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.477015 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.477034 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.477046 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.482135 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.492784 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.506076 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.518334 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.537339 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z 
is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.552167 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.562529 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.577369 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.578521 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.578550 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.578562 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.578578 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.578591 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.589580 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.602930 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.615849 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
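The condition objects embedded in all of these patches follow the Kubernetes PodCondition shape, and under $setElementOrder an entry may carry only its merge key (type) plus changed fields, which is why the iptables-alerter conditions just above omit status entirely. A dependency-free sketch that decodes one such condition into a local stand-in for k8s.io/api/core/v1.PodCondition (field names mirror the JSON keys in the log; the container list in the message is shortened here):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Local stand-in for k8s.io/api/core/v1.PodCondition so the sketch needs
// no external modules.
type PodCondition struct {
	Type               string     `json:"type"`
	Status             string     `json:"status,omitempty"`
	LastProbeTime      *time.Time `json:"lastProbeTime"`
	LastTransitionTime time.Time  `json:"lastTransitionTime"`
	Reason             string     `json:"reason,omitempty"`
	Message            string     `json:"message,omitempty"`
}

func main() {
	// Condition abbreviated from the ovnkube-node-z56jr patch above.
	raw := `{"lastProbeTime":null,"lastTransitionTime":"2026-02-26T11:07:16Z",` +
		`"message":"containers with unready status: [ovn-controller nbdb sbdb]",` +
		`"reason":"ContainersNotReady","status":"False","type":"Ready"}`
	var c PodCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s (%s): %s\n",
		c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason, c.Message)
}
```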
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.632020 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.648660 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.660388 4724 status_manager.go:875]
"Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.675497 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.681001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.681038 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.681050 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.681062 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.681071 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.688893 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.700845 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.712556 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.724325 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.735205 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.747974 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.759647 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.769908 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.783092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226
11:07:24.783131 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.783142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.783158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.783171 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.787253 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:24Z is after 2025-08-24T17:21:41Z"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.885292 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.885333 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.885343 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.885358 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.885368 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.974559 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.974717 4724 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:24 crc kubenswrapper[4724]: E0226 11:07:24.974777 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:24 crc kubenswrapper[4724]: E0226 11:07:24.974879 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.987402 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.987454 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.987469 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.987491 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:24 crc kubenswrapper[4724]: I0226 11:07:24.987505 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:24Z","lastTransitionTime":"2026-02-26T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.090533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.090585 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.090598 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.090614 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.090626 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.192883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.192923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.192935 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.192949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.192960 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.295475 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.295526 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.295541 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.295562 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.295577 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.399022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.399099 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.399120 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.399144 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.399162 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.451140 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.451237 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.473986 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.490867 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.501997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.502040 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.502053 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.502070 4724 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.502083 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.509918 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.523643 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.540580 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.557405 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.571433 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.582459 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.595264 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:
16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.604736 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.604794 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.604811 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.604828 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.604840 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.610627 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.627139 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e17
0d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.642856 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.653302 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.668632 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-26T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.707354 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.707391 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.707402 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.707418 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.707429 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.810122 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.810165 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.810193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.810210 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.810222 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.912756 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.912798 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.913002 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.913020 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.913033 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:25Z","lastTransitionTime":"2026-02-26T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:25 crc kubenswrapper[4724]: I0226 11:07:25.975588 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:25 crc kubenswrapper[4724]: E0226 11:07:25.975797 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.014937 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.014984 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.014995 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.015014 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.015026 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.117656 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.117683 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.117691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.117704 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.117714 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.219720 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.219752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.219764 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.219778 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.219792 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.323106 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.323150 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.323163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.323201 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.323215 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.425551 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.425589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.425601 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.425618 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.425629 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.528536 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.528576 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.528589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.528605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.528615 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.640027 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.640067 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.640077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.640092 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.640103 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.742736 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.742808 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.742826 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.742849 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.743058 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.846071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.846124 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.846136 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.846153 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.846164 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.948839 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.948888 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.948899 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.948913 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.948922 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:26Z","lastTransitionTime":"2026-02-26T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.974538 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:26 crc kubenswrapper[4724]: I0226 11:07:26.974600 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:26 crc kubenswrapper[4724]: E0226 11:07:26.974667 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:26 crc kubenswrapper[4724]: E0226 11:07:26.974735 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.050763 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.050795 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.050805 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.050821 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.050831 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.153199 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.153244 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.153261 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.153283 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.153298 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.254911 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.254949 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.254959 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.254973 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.254982 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.356871 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.356902 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.356910 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.356922 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.356931 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.459322 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.459355 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.459365 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.459380 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.459390 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.561102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.561142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.561151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.561163 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.561171 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.663809 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.663854 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.663865 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.663883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.663895 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.766000 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.766035 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.766047 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.766060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.766070 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.868194 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.868228 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.868238 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.868252 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.868263 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.970771 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.970831 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.970850 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.970873 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.970889 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:27Z","lastTransitionTime":"2026-02-26T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:27 crc kubenswrapper[4724]: I0226 11:07:27.974955 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:27 crc kubenswrapper[4724]: E0226 11:07:27.975246 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.079579 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.079612 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.079620 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.079634 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.079643 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.181705 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.181739 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.181747 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.181760 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.181769 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.284582 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.284645 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.284660 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.284678 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.284698 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.386280 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.386318 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.386327 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.386341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.386367 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.463978 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/0.log" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.466868 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b" exitCode=1 Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.466918 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.467730 4724 scope.go:117] "RemoveContainer" containerID="8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.482941 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.488094 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.488146 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.488158 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.488194 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.488206 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.498810 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e407920
96b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d16
9cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.508722 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.521204 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.536087 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.546387 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.559733 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.570349 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.584997 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.590697 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.590739 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.590753 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.590774 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.590787 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.599951 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.611508 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.624097 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.653446 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc249968
3700ab4170138cf023c4420b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed 
*v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.693199 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.693652 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.693664 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.693679 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.693692 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.796197 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.796237 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.796249 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.796263 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.796319 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.823386 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686"] Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.823785 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.825342 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.826432 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.836326 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.849688 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.862310 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.875289 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.887890 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.898581 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.898654 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.898664 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.898677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.898686 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:28Z","lastTransitionTime":"2026-02-26T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.901343 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.912434 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.930839 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc249968
3700ab4170138cf023c4420b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed 
*v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.947808 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.956327 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.968804 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.974352 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.974379 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:28 crc kubenswrapper[4724]: E0226 11:07:28.974441 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:28 crc kubenswrapper[4724]: E0226 11:07:28.974564 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.977702 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592681b1-be7d-45f1-9be8-64900e488bf3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.977748 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkwj\" (UniqueName: \"kubernetes.io/projected/592681b1-be7d-45f1-9be8-64900e488bf3-kube-api-access-9tkwj\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.977781 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592681b1-be7d-45f1-9be8-64900e488bf3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:28 crc kubenswrapper[4724]: I0226 11:07:28.977796 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592681b1-be7d-45f1-9be8-64900e488bf3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.001139 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.001218 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.001232 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.001219 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:28Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.001250 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.001385 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.027225 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.054903 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.079431 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592681b1-be7d-45f1-9be8-64900e488bf3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.079472 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkwj\" (UniqueName: \"kubernetes.io/projected/592681b1-be7d-45f1-9be8-64900e488bf3-kube-api-access-9tkwj\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.079544 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592681b1-be7d-45f1-9be8-64900e488bf3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.079585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592681b1-be7d-45f1-9be8-64900e488bf3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.080264 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592681b1-be7d-45f1-9be8-64900e488bf3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.080336 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592681b1-be7d-45f1-9be8-64900e488bf3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.085506 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592681b1-be7d-45f1-9be8-64900e488bf3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.097114 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkwj\" (UniqueName: \"kubernetes.io/projected/592681b1-be7d-45f1-9be8-64900e488bf3-kube-api-access-9tkwj\") pod \"ovnkube-control-plane-749d76644c-f7686\" (UID: \"592681b1-be7d-45f1-9be8-64900e488bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.103285 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.103327 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.103338 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.103353 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.103368 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.140645 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.205994 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.206014 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.206023 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.206036 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.206045 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.334627 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.334669 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.334681 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.334695 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.334706 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.437012 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.437051 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.437060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.437093 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.437103 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.470345 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" event={"ID":"592681b1-be7d-45f1-9be8-64900e488bf3","Type":"ContainerStarted","Data":"bcbc956b05a429da7457faade2b944826b48a96e47449615f5c32f7c708686af"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.472373 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/0.log" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.476245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.476858 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.489814 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.511900 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.526475 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.534029 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/network-metrics-daemon-tj879"] Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.535241 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:29 crc kubenswrapper[4724]: E0226 11:07:29.535315 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.539343 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.539389 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.539406 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.539427 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.539443 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.543803 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.555582 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.641002 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.642270 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.642298 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.642307 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.642321 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.642329 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.661322 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.675711 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.686941 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.687023 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55mr\" (UniqueName: \"kubernetes.io/projected/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-kube-api-access-z55mr\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.697284 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.719753 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed 
*v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.731311 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.744324 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.745199 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.745231 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.745241 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.745256 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.745266 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.759936 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e407920
96b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d16
9cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.771447 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.783366 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.787847 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.787905 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55mr\" (UniqueName: \"kubernetes.io/projected/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-kube-api-access-z55mr\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:29 crc kubenswrapper[4724]: E0226 11:07:29.788013 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:29 crc kubenswrapper[4724]: E0226 11:07:29.788080 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:30.288061452 +0000 UTC m=+116.943800567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.794754 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.807528 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55mr\" (UniqueName: \"kubernetes.io/projected/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-kube-api-access-z55mr\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.810558 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.826695 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.836764 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.847572 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.848027 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.848073 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.848085 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.848102 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.848113 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.868078 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed 
*v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.880772 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.898271 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.914368 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.939904 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e1
4fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.951323 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.951385 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.951401 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.951420 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.951432 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:29Z","lastTransitionTime":"2026-02-26T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.956691 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.975452 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:29 crc kubenswrapper[4724]: E0226 11:07:29.975599 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.976153 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:29 crc kubenswrapper[4724]: I0226 11:07:29.995505 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:29Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.013806 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.054522 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.054580 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.054596 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.054619 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.054632 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.158117 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.158171 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.158205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.158226 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.158242 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.261044 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.261098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.261120 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.261142 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.261156 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.293545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:30 crc kubenswrapper[4724]: E0226 11:07:30.293816 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:30 crc kubenswrapper[4724]: E0226 11:07:30.293995 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:31.293960189 +0000 UTC m=+117.949699484 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.362975 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.363013 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.363025 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.363040 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.363051 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.465019 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.465060 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.465071 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.465086 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.465098 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.480847 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" event={"ID":"592681b1-be7d-45f1-9be8-64900e488bf3","Type":"ContainerStarted","Data":"40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.480906 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" event={"ID":"592681b1-be7d-45f1-9be8-64900e488bf3","Type":"ContainerStarted","Data":"feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.493699 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.508460 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.517129 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.531465 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.542530 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.555609 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.565232 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.566662 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.566698 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.566714 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.566729 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.566740 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.577492 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.591119 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.609166 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.619511 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.630580 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.649663 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed 
*v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.660674 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 
11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.668644 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.668680 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.668691 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.668707 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.668736 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.673703 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.771387 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.771447 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.771462 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.771478 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.771489 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.874135 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.874193 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.874205 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.874221 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.874234 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.975367 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:30 crc kubenswrapper[4724]: E0226 11:07:30.975881 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.975676 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:30 crc kubenswrapper[4724]: E0226 11:07:30.975979 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.975438 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:30 crc kubenswrapper[4724]: E0226 11:07:30.976057 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.977151 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.977226 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.977244 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.977274 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:30 crc kubenswrapper[4724]: I0226 11:07:30.977291 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:30Z","lastTransitionTime":"2026-02-26T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.079542 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.079584 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.079593 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.079607 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.079616 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.182670 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.182741 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.182757 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.182837 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.182919 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.286108 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.286166 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.286194 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.286214 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.286227 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.304832 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.304984 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.305039 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:33.305025146 +0000 UTC m=+119.960764261 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.388020 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.388068 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.388078 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.388093 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.388102 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.484739 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/1.log" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.485236 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/0.log" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.488068 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062" exitCode=1 Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.488614 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.488650 4724 scope.go:117] "RemoveContainer" containerID="8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.489097 4724 scope.go:117] "RemoveContainer" containerID="9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062" Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.489222 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.491853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 
11:07:31.491883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.491894 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.491908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.491918 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.500829 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.513533 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 
11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.525248 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e1
4fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.534905 4724 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.546325 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.557937 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.568119 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.577696 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.588175 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.593835 4724 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.593877 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.593891 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.593914 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.593934 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.602964 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.615405 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.629559 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.641170 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.657468 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed *v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.666014 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 
11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.696788 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.696848 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.696858 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.696870 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.696879 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.753744 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.753787 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.753800 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.753818 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.753829 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.771475 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.775368 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.775415 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.775428 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.775446 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.775459 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.790425 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.794875 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.794928 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.794948 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.794976 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.794995 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.812210 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.815993 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.816039 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.816055 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.816077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.816093 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.828258 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.831590 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.831631 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.831646 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.831663 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.831676 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.843444 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:31Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.843570 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.844901 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
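The failure above is mechanical: every node-status PATCH from the kubelet is routed through the webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-26, so TLS verification fails, each patch is rejected, and the kubelet eventually gives up ("update node status exceeds retry count"). A minimal Go sketch (a hypothetical diagnostic, not part of the kubelet) that inspects the certificate's validity window the same way the TLS handshake does:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Connect without verification so the certificate can be inspected
	// even though it no longer validates.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject,
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the logged failure: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired")
	}
}

Either renewing the webhook's serving certificate or correcting a skewed node clock would clear this check; until then, node-status updates keep failing exactly as logged.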
event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.844941 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.844956 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.844977 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.844992 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.947525 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.947592 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.947605 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.947621 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.947634 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:31Z","lastTransitionTime":"2026-02-26T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.976512 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:31 crc kubenswrapper[4724]: E0226 11:07:31.976671 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:31 crc kubenswrapper[4724]: I0226 11:07:31.984740 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.049887 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.049936 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.049950 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.049972 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.049992 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.152791 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.152821 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.152830 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.152843 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.152853 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.255882 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.255923 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.255939 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.255970 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.255988 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.358077 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.358110 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.358121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.358137 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.358148 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.459898 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.459957 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.459969 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.459982 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.459991 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.494624 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/1.log" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.562076 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.562106 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.562117 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.562131 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.562143 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.664403 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.664451 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.664464 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.664481 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.664497 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.767430 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.767467 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.767479 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.767494 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.767506 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.869465 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.869504 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.869520 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.869541 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.869555 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.972515 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.972589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.972612 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.972640 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.972657 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:32Z","lastTransitionTime":"2026-02-26T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.974953 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.974964 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:32 crc kubenswrapper[4724]: E0226 11:07:32.975247 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:32 crc kubenswrapper[4724]: E0226 11:07:32.975379 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:32 crc kubenswrapper[4724]: I0226 11:07:32.974959 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:32 crc kubenswrapper[4724]: E0226 11:07:32.975582 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.075235 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.075282 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.075293 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.075309 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.075320 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.178137 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.178211 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.178233 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.178250 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.178262 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.280121 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.280211 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.280232 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.280255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.280274 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.325610 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:33 crc kubenswrapper[4724]: E0226 11:07:33.325851 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:33 crc kubenswrapper[4724]: E0226 11:07:33.326009 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:37.325991268 +0000 UTC m=+123.981730383 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.383251 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.383311 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.383332 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.383373 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.383405 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.485895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.485945 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.485962 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.485985 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.486002 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
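This mount failure is a separate problem from the CNI one: the volume manager wants Secret `metrics-daemon-secret` in `openshift-multus`, but the object is "not registered", i.e. the kubelet has not (yet) seen it, and it schedules a retry 4 s later (the `durationBeforeRetry` backoff in the entry above). A hedged sketch, assuming the third-party `kubernetes` Python client and a kubeconfig with read access to that namespace, for checking whether the Secret exists on the API server at all:

```python
#!/usr/bin/env python3
"""Check the Secret behind the failing metrics-certs volume mount.

Assumptions: third-party `kubernetes` client (pip install kubernetes) and a
kubeconfig allowed to read the openshift-multus namespace."""
from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE, NAME = "openshift-multus", "metrics-daemon-secret"  # from the log

def main() -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        secret = v1.read_namespaced_secret(NAME, NAMESPACE)
        print(f"{NAMESPACE}/{NAME} exists, keys: {sorted(secret.data or {})}")
    except ApiException as exc:
        if exc.status == 404:
            # Secret genuinely missing: the mount can never succeed.
            print(f"{NAMESPACE}/{NAME} not found on the API server")
        else:
            raise

if __name__ == "__main__":
    main()
```

If the Secret does exist, the "not registered" state usually just means the kubelet is still rebuilding its object caches after the restart at the top of this log.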
Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.588704 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.588741 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.588752 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.588768 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.588780 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.691735 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.691783 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.691797 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.691816 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.691827 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.794565 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.794603 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.794614 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.794628 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.794639 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.897463 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.897510 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.897527 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.897558 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.897574 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:33Z","lastTransitionTime":"2026-02-26T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.975327 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:33 crc kubenswrapper[4724]: E0226 11:07:33.975551 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:33 crc kubenswrapper[4724]: I0226 11:07:33.989517 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:33Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:33 crc kubenswrapper[4724]: E0226 11:07:33.998085 4724 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.004287 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.016442 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.028694 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.040889 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.051694 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.061991 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.076220 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: E0226 11:07:34.080737 4724 kubelet.go:2916] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.090387 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.106913 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b
3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.117007 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.127561 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.139943 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.158641 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed *v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.169839 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 
11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.179768 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:34Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.975571 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.975572 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:34 crc kubenswrapper[4724]: E0226 11:07:34.975826 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:34 crc kubenswrapper[4724]: I0226 11:07:34.976345 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:34 crc kubenswrapper[4724]: E0226 11:07:34.977125 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:34 crc kubenswrapper[4724]: E0226 11:07:34.977293 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:35 crc kubenswrapper[4724]: I0226 11:07:35.975200 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:35 crc kubenswrapper[4724]: E0226 11:07:35.975377 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.859983 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.860089 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.860113 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.860151 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860258 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860284 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860305 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860309 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:08.860263403 +0000 UTC m=+155.516002558 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860318 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860364 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860358 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:08.860344765 +0000 UTC m=+155.516083910 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860607 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:08.860581842 +0000 UTC m=+155.516320987 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.860642 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:08.860629813 +0000 UTC m=+155.516368968 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.961315 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.961581 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.961615 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.961636 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.961721 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:08.961697834 +0000 UTC m=+155.617436979 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.974970 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.975019 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:36 crc kubenswrapper[4724]: I0226 11:07:36.974962 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.975106 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.975391 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:36 crc kubenswrapper[4724]: E0226 11:07:36.975534 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.365616 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:37 crc kubenswrapper[4724]: E0226 11:07:37.365769 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:37 crc kubenswrapper[4724]: E0226 11:07:37.365815 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:07:45.36580201 +0000 UTC m=+132.021541125 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.461488 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.484512 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.498726 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 
2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.509569 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.530901 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.544212 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.557622 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.573324 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.586039 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.596733 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.613622 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.622865 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.633378 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.645268 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.657877 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.686057 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed *v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.699950 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 26 
11:07:37 crc kubenswrapper[4724]: I0226 11:07:37.975096 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:37 crc kubenswrapper[4724]: E0226 11:07:37.975375 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:38 crc kubenswrapper[4724]: I0226 11:07:38.641062 4724 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 11:07:38 crc kubenswrapper[4724]: I0226 11:07:38.974994 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:38 crc kubenswrapper[4724]: I0226 11:07:38.975007 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:38 crc kubenswrapper[4724]: I0226 11:07:38.975131 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:38 crc kubenswrapper[4724]: E0226 11:07:38.975365 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:38 crc kubenswrapper[4724]: E0226 11:07:38.975514 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:38 crc kubenswrapper[4724]: E0226 11:07:38.976032 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:39 crc kubenswrapper[4724]: E0226 11:07:39.082585 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:07:39 crc kubenswrapper[4724]: I0226 11:07:39.975017 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:39 crc kubenswrapper[4724]: E0226 11:07:39.975214 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:40 crc kubenswrapper[4724]: I0226 11:07:40.975370 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:40 crc kubenswrapper[4724]: I0226 11:07:40.975403 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:40 crc kubenswrapper[4724]: I0226 11:07:40.975491 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:40 crc kubenswrapper[4724]: E0226 11:07:40.975516 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:40 crc kubenswrapper[4724]: E0226 11:07:40.975677 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:40 crc kubenswrapper[4724]: E0226 11:07:40.975751 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
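The NetworkPluginNotReady records here are downstream of that same failure: with ovnkube-controller crash-looping, nothing has written a CNI config into /etc/kubernetes/cni/net.d/, so the kubelet cannot build a sandbox for any pod on the cluster network and keeps reporting NetworkReady=false. A small watch loop for the directory, assuming Python 3 run on the node (the path is taken from the records themselves):

    import os
    import time

    CNI_DIR = "/etc/kubernetes/cni/net.d"   # path named in the NetworkPluginNotReady records

    while True:
        # Directory may not exist yet on a freshly relabeled node.
        entries = sorted(os.listdir(CNI_DIR)) if os.path.isdir(CNI_DIR) else []
        if entries:
            print("CNI config present:", entries)
            break
        print("no CNI configuration file yet; network plugin still not ready")
        time.sleep(5)

Once ovnkube-controller stays up, ovn-kubernetes typically drops a 10-ovn-kubernetes.conf into this directory, and the "no CNI configuration file" and "No sandbox for pod can be found" records stop.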
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.868930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.869242 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.869255 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.869270 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.869281 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:41Z","lastTransitionTime":"2026-02-26T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.883360 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.886883 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.886908 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.886917 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.886930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.886939 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:41Z","lastTransitionTime":"2026-02-26T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.899121 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.902786 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.902814 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.902826 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.902840 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.902850 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:41Z","lastTransitionTime":"2026-02-26T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.914527 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.918682 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.918716 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.918728 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.918744 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.918756 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:41Z","lastTransitionTime":"2026-02-26T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.931016 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.934232 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.934269 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.934284 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.934303 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.934319 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:41Z","lastTransitionTime":"2026-02-26T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.946638 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.946806 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:07:41 crc kubenswrapper[4724]: I0226 11:07:41.974912 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:41 crc kubenswrapper[4724]: E0226 11:07:41.975148 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:42 crc kubenswrapper[4724]: I0226 11:07:42.975463 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:42 crc kubenswrapper[4724]: I0226 11:07:42.975507 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:42 crc kubenswrapper[4724]: E0226 11:07:42.975648 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:42 crc kubenswrapper[4724]: I0226 11:07:42.975673 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:42 crc kubenswrapper[4724]: E0226 11:07:42.975774 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:42 crc kubenswrapper[4724]: E0226 11:07:42.975875 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:43 crc kubenswrapper[4724]: I0226 11:07:43.975599 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:43 crc kubenswrapper[4724]: E0226 11:07:43.975835 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:43 crc kubenswrapper[4724]: I0226 11:07:43.976518 4724 scope.go:117] "RemoveContainer" containerID="9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062" Feb 26 11:07:43 crc kubenswrapper[4724]: I0226 11:07:43.989518 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:43Z is after 
2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.003655 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b3
5ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.019120 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8
fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429
622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.036174 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.049846 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.062971 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.075400 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: E0226 11:07:44.083307 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.090817 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multu
s-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.107476 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11
:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.124434 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.138863 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.154116 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.166068 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.178996 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.200206 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d51bdea9c8378248e5e1ce00a6a439bfc2499683700ab4170138cf023c4420b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:27Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 11:07:27.937648 6462 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 11:07:27.937742 6462 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 11:07:27.938106 6462 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 11:07:27.938119 6462 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 11:07:27.938129 6462 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 11:07:27.938145 6462 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 11:07:27.938154 6462 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 11:07:27.938165 6462 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 11:07:27.938198 6462 factory.go:656] Stopping watch factory\\\\nI0226 11:07:27.938210 6462 ovnkube.go:599] Stopped ovnkube\\\\nI0226 11:07:27.938222 6462 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 11:07:27.938235 6462 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 11:07:27.938244 6462 handler.go:208] Removed *v1.Namespace\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfca
dfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.211061 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 
11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.221327 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.232886 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.245692 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.257365 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.268437 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.281506 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.293866 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.310787 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e
639b25cc18930d5a77772062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.322531 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.332324 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.345546 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.361445 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.372156 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.384553 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.399460 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.413051 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.536303 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/1.log" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.538694 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.539723 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.552480 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha25
6:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.565887 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.575684 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.586258 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.597583 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.612317 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.623493 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.633962 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.646321 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.664975 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.674900 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.686840 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.696707 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.717802 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1
d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCon
tainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.727423 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 
11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.737482 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.975267 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.975288 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:44 crc kubenswrapper[4724]: I0226 11:07:44.975676 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:44 crc kubenswrapper[4724]: E0226 11:07:44.975793 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:44 crc kubenswrapper[4724]: E0226 11:07:44.976465 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:44 crc kubenswrapper[4724]: E0226 11:07:44.976568 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.446495 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:45 crc kubenswrapper[4724]: E0226 11:07:45.446624 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:45 crc kubenswrapper[4724]: E0226 11:07:45.446671 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:01.446657132 +0000 UTC m=+148.102396237 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.545373 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/2.log" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.546713 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/1.log" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.552606 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" exitCode=1 Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.552674 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.552733 4724 scope.go:117] "RemoveContainer" containerID="9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.553821 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:07:45 crc kubenswrapper[4724]: E0226 11:07:45.554218 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.579907 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.599761 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.611730 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.625851 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.643078 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.656136 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.668858 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.679990 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.691212 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.703381 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.715901 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.725333 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.734839 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.750058 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eab149b51537f878ef4781e20a6ffe2f896426e639b25cc18930d5a77772062\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"message\\\":\\\"[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:30.659501 6604 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:30Z is after 2025-08-24T17:21:41Z]\\\\nI0226 11:07:30.659495 6604 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:ma\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.759589 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.771156 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:45Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:45 crc kubenswrapper[4724]: I0226 11:07:45.974790 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:45 crc kubenswrapper[4724]: E0226 11:07:45.974937 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.557332 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/2.log" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.560671 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:07:46 crc kubenswrapper[4724]: E0226 11:07:46.560813 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.571372 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.589396 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.599642 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.613050 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.629809 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.653036 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.704759 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.718573 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.730850 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.742375 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.754430 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.766282 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.777707 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.787901 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.804676 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1
d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.815923 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.975316 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.975447 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:46 crc kubenswrapper[4724]: I0226 11:07:46.975402 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:46 crc kubenswrapper[4724]: E0226 11:07:46.975626 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:46 crc kubenswrapper[4724]: E0226 11:07:46.975697 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:46 crc kubenswrapper[4724]: E0226 11:07:46.975794 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:47 crc kubenswrapper[4724]: I0226 11:07:47.975441 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:47 crc kubenswrapper[4724]: E0226 11:07:47.975904 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:48 crc kubenswrapper[4724]: I0226 11:07:48.974726 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:48 crc kubenswrapper[4724]: I0226 11:07:48.974724 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:48 crc kubenswrapper[4724]: I0226 11:07:48.974748 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:48 crc kubenswrapper[4724]: E0226 11:07:48.974967 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:48 crc kubenswrapper[4724]: E0226 11:07:48.975039 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:48 crc kubenswrapper[4724]: E0226 11:07:48.975067 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:49 crc kubenswrapper[4724]: E0226 11:07:49.084883 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:07:49 crc kubenswrapper[4724]: I0226 11:07:49.975169 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:49 crc kubenswrapper[4724]: E0226 11:07:49.975431 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:50 crc kubenswrapper[4724]: I0226 11:07:50.974999 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:50 crc kubenswrapper[4724]: E0226 11:07:50.975141 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:50 crc kubenswrapper[4724]: I0226 11:07:50.975207 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:50 crc kubenswrapper[4724]: I0226 11:07:50.975213 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:50 crc kubenswrapper[4724]: E0226 11:07:50.975386 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:50 crc kubenswrapper[4724]: E0226 11:07:50.975483 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:51 crc kubenswrapper[4724]: I0226 11:07:51.975480 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:51 crc kubenswrapper[4724]: E0226 11:07:51.975649 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.346776 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.347725 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.347834 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.347930 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.348041 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:52Z","lastTransitionTime":"2026-02-26T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.361352 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.366797 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.366838 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.366853 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.366875 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.366888 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:52Z","lastTransitionTime":"2026-02-26T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.381877 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.385677 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.385709 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.385718 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.385733 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.385743 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:52Z","lastTransitionTime":"2026-02-26T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.398212 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.401480 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.401523 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.401533 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.401552 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.401563 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:52Z","lastTransitionTime":"2026-02-26T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.412287 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.415983 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.416021 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.416031 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.416046 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.416057 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:07:52Z","lastTransitionTime":"2026-02-26T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.426787 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:52Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.426937 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.974653 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.974759 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.974939 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:52 crc kubenswrapper[4724]: I0226 11:07:52.975291 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.975352 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:52 crc kubenswrapper[4724]: E0226 11:07:52.975539 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:53 crc kubenswrapper[4724]: I0226 11:07:53.974805 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:53 crc kubenswrapper[4724]: E0226 11:07:53.975052 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:53 crc kubenswrapper[4724]: I0226 11:07:53.987683 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:53Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.003483 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.017811 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0
c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.030331 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.044349 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.056074 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.067002 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.079862 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: E0226 11:07:54.085239 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.097654 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.108122 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.119988 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.131211 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.141786 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.153668 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.175318 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1
d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.185838 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.975468 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.975487 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:54 crc kubenswrapper[4724]: E0226 11:07:54.975617 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:54 crc kubenswrapper[4724]: I0226 11:07:54.975637 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:54 crc kubenswrapper[4724]: E0226 11:07:54.975722 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:54 crc kubenswrapper[4724]: E0226 11:07:54.975810 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:55 crc kubenswrapper[4724]: I0226 11:07:55.974998 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:55 crc kubenswrapper[4724]: E0226 11:07:55.975133 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:56 crc kubenswrapper[4724]: I0226 11:07:56.975418 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:56 crc kubenswrapper[4724]: I0226 11:07:56.975462 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:56 crc kubenswrapper[4724]: E0226 11:07:56.975590 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:56 crc kubenswrapper[4724]: I0226 11:07:56.975763 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:56 crc kubenswrapper[4724]: E0226 11:07:56.975796 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:56 crc kubenswrapper[4724]: E0226 11:07:56.975958 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:57 crc kubenswrapper[4724]: I0226 11:07:57.975333 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:57 crc kubenswrapper[4724]: E0226 11:07:57.975546 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:07:58 crc kubenswrapper[4724]: I0226 11:07:58.975078 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:07:58 crc kubenswrapper[4724]: I0226 11:07:58.975239 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:07:58 crc kubenswrapper[4724]: I0226 11:07:58.975097 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:07:58 crc kubenswrapper[4724]: E0226 11:07:58.975444 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:07:58 crc kubenswrapper[4724]: E0226 11:07:58.975534 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:07:58 crc kubenswrapper[4724]: E0226 11:07:58.975622 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:07:58 crc kubenswrapper[4724]: I0226 11:07:58.977164 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:07:58 crc kubenswrapper[4724]: E0226 11:07:58.977612 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" Feb 26 11:07:59 crc kubenswrapper[4724]: E0226 11:07:59.086609 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:07:59 crc kubenswrapper[4724]: I0226 11:07:59.974674 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:07:59 crc kubenswrapper[4724]: E0226 11:07:59.974855 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:00 crc kubenswrapper[4724]: I0226 11:08:00.974638 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:00 crc kubenswrapper[4724]: I0226 11:08:00.974699 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:00 crc kubenswrapper[4724]: E0226 11:08:00.974770 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:00 crc kubenswrapper[4724]: I0226 11:08:00.974667 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:00 crc kubenswrapper[4724]: E0226 11:08:00.974968 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:00 crc kubenswrapper[4724]: E0226 11:08:00.975099 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:00 crc kubenswrapper[4724]: I0226 11:08:00.992541 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 11:08:01 crc kubenswrapper[4724]: I0226 11:08:01.508824 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:01 crc kubenswrapper[4724]: E0226 11:08:01.509062 4724 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:08:01 crc kubenswrapper[4724]: E0226 11:08:01.509167 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs podName:00a83b55-07c3-47d4-9e4a-9d613f82d8a4 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.509144253 +0000 UTC m=+180.164883378 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs") pod "network-metrics-daemon-tj879" (UID: "00a83b55-07c3-47d4-9e4a-9d613f82d8a4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 11:08:01 crc kubenswrapper[4724]: I0226 11:08:01.975543 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:01 crc kubenswrapper[4724]: E0226 11:08:01.975736 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.593298 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.593343 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.593354 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.593371 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.593383 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:02Z","lastTransitionTime":"2026-02-26T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.606681 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:02Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.611472 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.611505 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.611516 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.611531 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.611542 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:02Z","lastTransitionTime":"2026-02-26T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.629064 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:02Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.633268 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.633325 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.633341 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.633364 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.633383 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:02Z","lastTransitionTime":"2026-02-26T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.645943 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:02Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.650375 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.650405 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.650415 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.650428 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.650443 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:02Z","lastTransitionTime":"2026-02-26T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.668517 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:02Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.671997 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.672022 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.672032 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.672046 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.672054 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:02Z","lastTransitionTime":"2026-02-26T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.683418 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:02Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.683529 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.974489 4724 util.go:30] "No sandbox for pod can be found. 
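Every status-patch retry above failed identically: the kubelet's POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node was rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-26T11:08:02Z, and the node stays NotReady because no CNI configuration has been written to /etc/kubernetes/cni/net.d/ yet. A minimal diagnostic sketch of both checks, assuming it runs on the crc node itself and that the third-party cryptography package is available; the address, directory, and dates are taken from the log lines above, and the script is illustrative, not part of any cluster tooling:

# Diagnostic sketch for the two failures recurring in this log.
# Assumptions: run on the "crc" node; the webhook still listens on
# 127.0.0.1:9743 as in the logged POST; "cryptography" is installed.
import os
import socket
import ssl
from datetime import datetime

from cryptography import x509

WEBHOOK_ADDR = ("127.0.0.1", 9743)          # from the failed webhook POST above
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # from the NetworkPluginNotReady message

# 1. Fetch the webhook serving certificate WITHOUT verification; a verifying
#    handshake would fail exactly the way the kubelet's did
#    ("x509: certificate has expired or is not yet valid").
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection(WEBHOOK_ADDR, timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=WEBHOOK_ADDR[0]) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
not_after = cert.not_valid_after            # naive datetime, UTC
print("webhook cert notAfter:", not_after)  # log reports 2025-08-24T17:21:41Z
print("expired:", not_after < datetime.utcnow())

# 2. Confirm whether any CNI configuration exists yet; the kubelet reports
#    NetworkReady=false until the network provider writes one here.
try:
    confs = sorted(os.listdir(CNI_CONF_DIR))
    print("CNI configs:", confs if confs else "none (matches NetworkReady=false)")
except FileNotFoundError:
    print("CNI conf dir missing:", CNI_CONF_DIR)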
Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.974489 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.974630 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.974503 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.974700 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:02 crc kubenswrapper[4724]: I0226 11:08:02.974489 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:02 crc kubenswrapper[4724]: E0226 11:08:02.974753 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:03 crc kubenswrapper[4724]: I0226 11:08:03.975438 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:03 crc kubenswrapper[4724]: E0226 11:08:03.976460 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.007718 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686c393b-8069-4a11-ad05-58a4fb1ac696\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d203c18e04f74a64ee80185ab1b24934a813cb2755b922d672db40a0dda14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://148ac36c722086f3da666bd9d11b1732195ceb43811dcf0a3491c8f85ab56024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5f2a8d9b46117fbada78998ba1203e9ea5af9fa89a86dda7c27c0a4b6aa552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8354a3cde0c7aca4a6e9131d8654ed3498e27c634ea4bb903b708853f67ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ea02fbc7026314379f833340a9b4bed7dd57ba1bbfdd95be51bdb34be147d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4
907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.023868 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.041893 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.059411 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.074779 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: E0226 11:08:04.087169 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.087645 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.103128 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.116585 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.126934 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.139074 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.151538 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.168790 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1
d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.181197 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.191152 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.202577 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.214828 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.222840 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.617985 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/0.log" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.618384 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="332754e6-e64b-4e47-988d-6f1ddbe4912e" containerID="f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0" exitCode=1 Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.618448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ns2kr" event={"ID":"332754e6-e64b-4e47-988d-6f1ddbe4912e","Type":"ContainerDied","Data":"f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0"} Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.618991 4724 scope.go:117] "RemoveContainer" containerID="f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.637501 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.654777 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.673360 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.685122 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.697151 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.711932 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:08:04Z\\\",\\\"message\\\":\\\"2026-02-26T11:07:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e\\\\n2026-02-26T11:07:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e to /host/opt/cni/bin/\\\\n2026-02-26T11:07:19Z [verbose] multus-daemon started\\\\n2026-02-26T11:07:19Z [verbose] Readiness Indicator file check\\\\n2026-02-26T11:08:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.723711 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.742692 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1
d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.754884 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.766266 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.785539 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.798819 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.816143 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.831085 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.846043 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.869292 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686c393b-8069-4a11-ad05-58a4fb1ac696\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d203c18e04f74a64ee80185ab1b24934a813cb2755b922d672db40a0dda14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://148ac36c722086f3da666bd9d11b1732195ceb43811dcf0a3491c8f85ab56024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5f2a8d9b46117fbada78998ba1203e9ea5af9fa89a86dda7c27c0a4b6aa552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8354a3cde0c7aca4a6e9131d8654ed3498e27
c634ea4bb903b708853f67ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ea02fbc7026314379f833340a9b4bed7dd57ba1bbfdd95be51bdb34be147d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.887983 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.974945 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:04 crc kubenswrapper[4724]: E0226 11:08:04.975074 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.975163 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:04 crc kubenswrapper[4724]: E0226 11:08:04.975263 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:04 crc kubenswrapper[4724]: I0226 11:08:04.975464 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:04 crc kubenswrapper[4724]: E0226 11:08:04.975512 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.624490 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/0.log" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.624553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ns2kr" event={"ID":"332754e6-e64b-4e47-988d-6f1ddbe4912e","Type":"ContainerStarted","Data":"3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384"} Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.640345 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.659295 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.671049 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.683101 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.694440 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.706957 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.717508 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.735256 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686c393b-8069-4a11-ad05-58a4fb1ac696\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d203c18e04f74a64ee80185ab1b24934a813cb2755b922d672db40a0dda14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://148ac36c722086f3da666bd9d11b1732195ceb43811dcf0a3491c8f85ab56024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5f2a8d9b46117fbada78998ba1203e9ea5af9fa89a86dda7c27c0a4b6aa552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8354a3cde0c7aca4a6e9131d8654ed3498e27
c634ea4bb903b708853f67ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ea02fbc7026314379f833340a9b4bed7dd57ba1bbfdd95be51bdb34be147d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.747598 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.761750 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.772561 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.784144 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.793876 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.805869 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:
16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.816779 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:08:04Z\\\",\\\"message\\\":\\\"2026-02-26T11:07:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e\\\\n2026-02-26T11:07:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e to /host/opt/cni/bin/\\\\n2026-02-26T11:07:19Z [verbose] multus-daemon started\\\\n2026-02-26T11:07:19Z [verbose] Readiness Indicator file check\\\\n2026-02-26T11:08:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.828382 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 
11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.846273 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede91
2ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:05 crc kubenswrapper[4724]: I0226 11:08:05.974657 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:05 crc kubenswrapper[4724]: E0226 11:08:05.974882 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:06 crc kubenswrapper[4724]: I0226 11:08:06.975293 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:06 crc kubenswrapper[4724]: I0226 11:08:06.975403 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:06 crc kubenswrapper[4724]: E0226 11:08:06.975423 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:06 crc kubenswrapper[4724]: E0226 11:08:06.975563 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:06 crc kubenswrapper[4724]: I0226 11:08:06.975316 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:06 crc kubenswrapper[4724]: E0226 11:08:06.975681 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:07 crc kubenswrapper[4724]: I0226 11:08:07.974689 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:07 crc kubenswrapper[4724]: E0226 11:08:07.974983 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:07 crc kubenswrapper[4724]: I0226 11:08:07.988915 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.889382 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.889510 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.889532 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889607 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:09:12.889568469 +0000 UTC m=+219.545307594 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889648 4724 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889665 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.889680 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889689 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889750 4724 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889705 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:09:12.889687683 +0000 UTC m=+219.545426898 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889722 4724 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889855 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 11:09:12.889844187 +0000 UTC m=+219.545583302 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.889895 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 11:09:12.889880018 +0000 UTC m=+219.545619203 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.975156 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.975164 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.975458 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.975326 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.975546 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.975619 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:08 crc kubenswrapper[4724]: I0226 11:08:08.990671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.990874 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.990897 4724 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.990907 4724 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:08:08 crc kubenswrapper[4724]: E0226 11:08:08.990957 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 11:09:12.990941816 +0000 UTC m=+219.646680931 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 11:08:09 crc kubenswrapper[4724]: E0226 11:08:09.088210 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 11:08:09 crc kubenswrapper[4724]: I0226 11:08:09.974719 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:09 crc kubenswrapper[4724]: E0226 11:08:09.974875 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:10 crc kubenswrapper[4724]: I0226 11:08:10.975208 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:10 crc kubenswrapper[4724]: I0226 11:08:10.975277 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:10 crc kubenswrapper[4724]: I0226 11:08:10.975345 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:10 crc kubenswrapper[4724]: E0226 11:08:10.975431 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:10 crc kubenswrapper[4724]: E0226 11:08:10.975360 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:10 crc kubenswrapper[4724]: E0226 11:08:10.975538 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:11 crc kubenswrapper[4724]: I0226 11:08:11.974878 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:11 crc kubenswrapper[4724]: E0226 11:08:11.975552 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.747589 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.747660 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.747674 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.747695 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.747711 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:12Z","lastTransitionTime":"2026-02-26T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.760422 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.766027 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.766072 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.766082 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.766098 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.766111 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:12Z","lastTransitionTime":"2026-02-26T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.780876 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.785818 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.785872 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.785881 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.785895 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.785905 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:12Z","lastTransitionTime":"2026-02-26T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.798219 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.801951 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.801978 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.801987 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.802001 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.802010 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:12Z","lastTransitionTime":"2026-02-26T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.870380 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.875907 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.875974 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.875996 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.876042 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.876054 4724 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T11:08:12Z","lastTransitionTime":"2026-02-26T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.891251 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c9118f1-edc7-4e76-b83d-ad0410c545bb\\\",\\\"systemUUID\\\":\\\"68498961-1c21-4225-84c0-71d91bc5664e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.891438 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.974969 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.975004 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:12 crc kubenswrapper[4724]: I0226 11:08:12.975117 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.975106 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.975266 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:12 crc kubenswrapper[4724]: E0226 11:08:12.975327 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:13 crc kubenswrapper[4724]: I0226 11:08:13.975306 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:13 crc kubenswrapper[4724]: E0226 11:08:13.975461 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:13 crc kubenswrapper[4724]: I0226 11:08:13.976459 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:13.999989 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686c393b-8069-4a11-ad05-58a4fb1ac696\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d203c18e04f74a64ee80185ab1b24934a813cb2755b922d672db40a0dda14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://148ac36c722086f3da666bd9d11b1732195ceb43811dcf0a3491c8f85ab56024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5f2a8d9b46117fbada78998ba1203e9ea5af9fa89a86dda7c27c0a4b6aa552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2
f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8354a3cde0c7aca4a6e9131d8654ed3498e27c634ea4bb903b708853f67ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ea02fbc7026314379f833340a9b4bed7dd57ba1bbfdd95be51bdb34be147d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:13Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.016666 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.031517 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.046561 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.060846 4724 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 
11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.077487 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:08:04Z\\\",\\\"message\\\":\\\"2026-02-26T11:07:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e\\\\n2026-02-26T11:07:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e to /host/opt/cni/bin/\\\\n2026-02-26T11:07:19Z [verbose] multus-daemon started\\\\n2026-02-26T11:07:19Z [verbose] Readiness Indicator file check\\\\n2026-02-26T11:08:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: E0226 11:08:14.090760 4724 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.097760 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4891ce-f96f-4918-afdc-4af091e8cdf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e92d51ca0bcde355c08989b23b0a74610f818b9d28a946d9260561e934dfea5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:35Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 11:06:05.634789 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 11:06:05.635532 1 observer_polling.go:159] Starting file observer\\\\nI0226 11:06:05.636264 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 11:06:05.636838 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 11:06:27.980752 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0226 11:06:35.262775 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 11:06:35.262835 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:05Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe8db8375d5a3f78d3345a3a3d9fd57496cbbf2338e3e6c9d7b9ecd638257f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d145aa52f5003862c62b869f473cfc5fe8ff7fb099013d711b206630407c5cd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.110762 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.128882 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.145802 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.163044 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.177895 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.200420 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z56jr_openshift-ovn-kubernetes(4c1140bb-3473-456a-b916-cfef4d4b7222)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.219079 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.238081 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.256333 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.279861 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.302475 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.657906 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/2.log" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.663782 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerStarted","Data":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.664438 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.701375 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:33Z\\\",\\\"message\\\":\\\"W0226 11:06:33.218996 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 11:06:33.219490 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772103993 cert, and key in /tmp/serving-cert-2688003005/serving-signer.crt, /tmp/serving-cert-2688003005/serving-signer.key\\\\nI0226 11:06:33.551627 1 observer_polling.go:159] Starting file observer\\\\nW0226 11:06:33.562435 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0226 11:06:33.562598 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 11:06:33.563366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2688003005/tls.crt::/tmp/serving-cert-2688003005/tls.key\\\\\\\"\\\\nF0226 11:06:33.909606 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.731366 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23294c7d-d7c0-4b51-92a0-f7df8c67ff0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21a563ca2978882c4ca3f1344f600cba54aed6fcf334a1fd715e5dcf377cf20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8fc9a9e7150637bbd1733b68840b4bf24f3f1d8b6943f2f90e1bd4b1a6284e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://292246633fb22c553f1b3c93e14fd218a7f7c0a7d64f75bbc743eb1e1d762a35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f6d900a148e9e1ace75145745dd723f324956fec07e2f52f19f6d01c37d02a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a30c811ab0828793e5f3c006b78d8e6cd0c23d0c52409f429622bc4e56e99e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93038ef48a78c1a5373e9ad859141b3795398859fe5f027062d9e45a86a729aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c9cd05cc332a05e4d16a787743081c65b5a9fba22f86d66a4d169cea85991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hq495\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wtm5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.750127 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-49n4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9df16c-aeb4-4568-acbc-01b30c871371\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c79eb47a1cb74194d6868d8777fd331161ab71fae057a93519b6ead3b1162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6xfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-49n4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.799061 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686c393b-8069-4a11-ad05-58a4fb1ac696\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d203c18e04f74a64ee80185ab1b24934a813cb2755b922d672db40a0dda14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://148ac36c722086f3da666bd9d11b1732195ceb43811dcf0a3491c8f85ab56024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5f2a8d9b46117fbada78998ba1203e9ea5af9fa89a86dda7c27c0a4b6aa552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f8354a3cde0c7aca4a6e9131d8654ed3498e27
c634ea4bb903b708853f67ae3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ea02fbc7026314379f833340a9b4bed7dd57ba1bbfdd95be51bdb34be147d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd81d006a26d0167fa7a35db8d58a0c58efd8e0a680fef2b505fe9bde1ccf89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0083f66b44d4a4907a631b1a5cebd789231a3a9ccb2ec23180da662d9e0a7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f39a3c00519f0bd140f7b4ca72ae785fc0dc56bc23fdb7bfae991041dea8370e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.820312 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0979755c2c05f45c7e4ab094f0489db658dd4e13de450568d2c6dfe9d7e167f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.837589 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.851749 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a31597b7c0447e57dde19f73e2cdd0fb1448e17c9b34ece8ef91fd2bb887be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.869199 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f6ef5ac2cc3ffddd897ed4779c97f614e1b4ee9adbd4b7e458b58b2b036d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.885900 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zfscs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b41f80b-f054-43aa-9b24-64d58d45f72f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e14874c818bdfae50a22e51e86c56f577ff82941e2c2b71a462d5798e8f63d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cln7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zfscs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.906447 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2405c92-e87c-4e60-ac28-0cd51800d9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e969cfe497f7b5c43998e06915e8889b15213e97e2e43734c643ea66372b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pck7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gv7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.927494 4724 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ns2kr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"332754e6-e64b-4e47-988d-6f1ddbe4912e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:08:04Z\\\",\\\"message\\\":\\\"2026-02-26T11:07:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e\\\\n2026-02-26T11:07:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_829862bd-eb63-4276-bc3f-ed6136a1bb4e to /host/opt/cni/bin/\\\\n2026-02-26T11:07:19Z [verbose] multus-daemon started\\\\n2026-02-26T11:07:19Z [verbose] Readiness Indicator file check\\\\n2026-02-26T11:08:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rn4f6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ns2kr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.947674 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4891ce-f96f-4918-afdc-4af091e8cdf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e92d51ca0bcde355c08989b23b0a74610f818b9d28a946d9260561e934dfea5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1f53d018ded65ea8f7f942ccaf1686887229ec9ddc478ce0676bc5e2d92279\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T11:06:35Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 11:06:05.634789 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 11:06:05.635532 1 observer_polling.go:159] Starting file observer\\\\nI0226 11:06:05.636264 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 11:06:05.636838 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 11:06:27.980752 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0226 11:06:35.262775 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 11:06:35.262835 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:06:05Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe8db8375d5a3f78d3345a3a3d9fd57496cbbf2338e3e6c9d7b9ecd638257f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d145aa52f5003862c62b869f473cfc5fe8ff7fb099013d711b206630407c5cd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.963247 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6848715-489f-49c7-b82d-9af20e8cd462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22edd40f30fb82c8479e0a6d01bdf4d119bd515a7b7e70c6d7105677b618723b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dfcc0af5de38f5634f6a85b7c165bcc50a6c2e1b65d2d9d66127a67a165af2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2f1f6ee7310c86c0678a38e4a310a5a875177667cf63738f013eb4dc93e0ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bdda14bdcef0814b70ed4efb53ea51a6d3e1eb3d57af7d44a258bc1f4bb6564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:05:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:05:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:05:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.974950 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.975092 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.975133 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:14 crc kubenswrapper[4724]: E0226 11:08:14.975195 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:14 crc kubenswrapper[4724]: E0226 11:08:14.975323 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:14 crc kubenswrapper[4724]: E0226 11:08:14.975442 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.986010 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:14Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:14 crc kubenswrapper[4724]: I0226 11:08:14.993270 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.008821 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.033283 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1140bb-3473-456a-b916-cfef4d4b7222\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31353b777bb9a417b5d8a17f5760365e9173002f
765835f59708f3f8719b6b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T11:07:44Z\\\",\\\"message\\\":\\\"64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 11:07:44.892375 6832 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
callin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvffk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z56jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.050237 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592681b1-be7d-45f1-9be8-64900e488bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb9434ad9bf3a8f9ef38bf4f5218d566f1ccb39d3d42bce65ce126c97820369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c7a7a681b5d14f61688da1282df4dad76964f5f0abce790d5bfec57daf7e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9tkwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7686\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 26 
11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.063046 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tj879" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T11:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z55mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T11:07:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tj879\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 26 11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.367755 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tj879"] Feb 26 11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.668015 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:15 crc kubenswrapper[4724]: E0226 11:08:15.668228 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:15 crc kubenswrapper[4724]: I0226 11:08:15.974720 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:15 crc kubenswrapper[4724]: E0226 11:08:15.974910 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:16 crc kubenswrapper[4724]: I0226 11:08:16.975241 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:16 crc kubenswrapper[4724]: I0226 11:08:16.975241 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:16 crc kubenswrapper[4724]: E0226 11:08:16.975475 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:16 crc kubenswrapper[4724]: E0226 11:08:16.975529 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:16 crc kubenswrapper[4724]: I0226 11:08:16.975280 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:16 crc kubenswrapper[4724]: E0226 11:08:16.975638 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:17 crc kubenswrapper[4724]: I0226 11:08:17.975020 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:17 crc kubenswrapper[4724]: E0226 11:08:17.975309 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 11:08:18 crc kubenswrapper[4724]: I0226 11:08:18.974947 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:18 crc kubenswrapper[4724]: I0226 11:08:18.974991 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:18 crc kubenswrapper[4724]: I0226 11:08:18.975041 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:18 crc kubenswrapper[4724]: E0226 11:08:18.977294 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 11:08:18 crc kubenswrapper[4724]: E0226 11:08:18.977741 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tj879" podUID="00a83b55-07c3-47d4-9e4a-9d613f82d8a4" Feb 26 11:08:18 crc kubenswrapper[4724]: E0226 11:08:18.977803 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 11:08:19 crc kubenswrapper[4724]: I0226 11:08:19.975167 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 11:08:19 crc kubenswrapper[4724]: I0226 11:08:19.980061 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 11:08:19 crc kubenswrapper[4724]: I0226 11:08:19.986722 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.974736 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.974789 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.975399 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.979500 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.979516 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.979551 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 11:08:20 crc kubenswrapper[4724]: I0226 11:08:20.982520 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 11:08:22 crc kubenswrapper[4724]: I0226 11:08:22.968834 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 11:08:22 crc kubenswrapper[4724]: I0226 11:08:22.969716 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 11:08:22 crc kubenswrapper[4724]: I0226 11:08:22.969834 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 11:08:22 crc kubenswrapper[4724]: I0226 11:08:22.969921 4724 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.020976 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nj24t"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.021712 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.022866 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wdxr7"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.023747 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.024246 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rrbmc"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.024567 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.024724 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.024926 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2m27r"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.025332 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.026116 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.026477 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mckmm"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.027068 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.031056 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.031445 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: W0226 11:08:23.031670 4724 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.031705 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:08:23 crc kubenswrapper[4724]: W0226 11:08:23.031705 4724 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 26 11:08:23 crc kubenswrapper[4724]: W0226 11:08:23.031783 4724 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Feb 26 11:08:23 crc kubenswrapper[4724]: E0226 11:08:23.031741 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 11:08:23 crc kubenswrapper[4724]: E0226 11:08:23.031833 4724 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.032037 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: E0226 11:08:23.032638 4724 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.032939 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.036379 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.036445 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.036498 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.036450 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.039687 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.040450 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.042501 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.044389 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.051748 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.052045 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.058565 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.059083 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.061982 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.062681 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.063263 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.063904 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.078231 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.078689 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.078788 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.079565 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.080703 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.080869 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.081639 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.081728 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.081867 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.082307 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.082620 4724 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.082691 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.092458 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.097547 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.098282 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.098769 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.099122 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.099539 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s92pk"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.099825 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.100296 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.100840 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.101115 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.101422 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9cwcb"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.101657 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.101728 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d9shf"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.101935 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.102172 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.102539 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.109165 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.116521 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.116990 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.117803 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.121692 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.122552 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.125012 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.126166 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.141146 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.127291 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.173148 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k5ktg"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.173740 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.174492 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.175133 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.176070 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.181350 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.187683 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.189107 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.126433 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.127993 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128055 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128117 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128268 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128313 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128366 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128413 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128463 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128509 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.128789 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.129812 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.130249 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.130289 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.130322 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.130351 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.130375 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 
11:08:23.140649 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.190302 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.190347 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.191347 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.192281 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.195250 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.195572 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.195686 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.195790 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.195871 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.196072 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.206239 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.208487 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.210569 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.210649 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.210745 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.210929 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.210989 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.213719 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.214064 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.214237 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.214313 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.214588 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.214796 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.215207 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.215607 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.216045 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.216393 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h27ll"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.216726 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.218281 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 
11:08:23.218455 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.220997 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.223052 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.223498 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.225932 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227172 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227391 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227497 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227622 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227729 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227818 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.227928 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228042 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228152 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228276 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228413 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228544 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228644 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228771 4724 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.228896 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.233043 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.233323 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.233635 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.235267 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.236446 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.236930 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=23.23692027 podStartE2EDuration="23.23692027s" podCreationTimestamp="2026-02-26 11:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:23.223394623 +0000 UTC m=+169.879133748" watchObservedRunningTime="2026-02-26 11:08:23.23692027 +0000 UTC m=+169.892659385" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.237895 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238405 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-config\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238434 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223205f4-c6e1-4f77-bfc3-667ad541a34e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238471 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/809c874c-661e-43c3-9e0e-6ee95ed8586e-proxy-tls\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238486 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-etcd-client\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238505 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238522 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-config\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5472b\" (UniqueName: \"kubernetes.io/projected/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-kube-api-access-5472b\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238554 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-oauth-config\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238581 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238599 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-image-import-ca\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238616 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjtk\" (UniqueName: 
\"kubernetes.io/projected/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-kube-api-access-znjtk\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238633 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/223205f4-c6e1-4f77-bfc3-667ad541a34e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238654 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-service-ca\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238671 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvs9\" (UniqueName: \"kubernetes.io/projected/809c874c-661e-43c3-9e0e-6ee95ed8586e-kube-api-access-9lvs9\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238689 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238744 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e9f338-eb02-4618-aafb-37065b3823f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238771 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qnkw\" (UniqueName: \"kubernetes.io/projected/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-kube-api-access-8qnkw\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238799 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49a62159-9584-4fd5-b9d2-e81d422f5089-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238822 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-oauth-serving-cert\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238862 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-stats-auth\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238881 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-serving-cert\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238907 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrdk\" (UniqueName: \"kubernetes.io/projected/feac8cdb-eb8a-4f0d-afee-d18467d73727-kube-api-access-gtrdk\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238939 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238964 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-client-ca\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.238991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239018 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239053 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnj4w\" (UniqueName: \"kubernetes.io/projected/4c344657-7620-4366-80a9-84de8ed2face-kube-api-access-hnj4w\") pod \"cluster-samples-operator-665b6dd947-69bcg\" (UID: \"4c344657-7620-4366-80a9-84de8ed2face\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239102 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/207d3079-e7ed-46b9-8744-aed50bb42352-available-featuregates\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239122 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-ca\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239144 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-config\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239195 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239217 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-config\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239236 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0eb89f1c-1230-4455-86c1-6ad3796969a9-node-pullsecrets\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239254 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44wp\" (UniqueName: \"kubernetes.io/projected/b0e436fd-9344-4f55-ae35-4eae3aac24c8-kube-api-access-g44wp\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.239400 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.241086 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.247842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.250986 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-auth-proxy-config\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251019 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqwr\" (UniqueName: \"kubernetes.io/projected/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-kube-api-access-2hqwr\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251042 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-serving-cert\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251063 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251737 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f469f47-990d-4224-8002-c658ef626f48-audit-dir\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251770 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251798 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feac8cdb-eb8a-4f0d-afee-d18467d73727-audit-dir\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251817 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251862 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7e9f338-eb02-4618-aafb-37065b3823f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251909 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7e9f338-eb02-4618-aafb-37065b3823f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251928 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-client\") 
pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251955 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.251977 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42a649e6-a13d-4a1d-94a6-82c03d5a913b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252004 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff98r\" (UniqueName: \"kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252021 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ffca8b8-930c-4a19-93ff-e47500546d2e-serving-cert\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252039 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45069e17-f50a-47d5-9552-b32b9eecadce-service-ca-bundle\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252061 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5c9\" (UniqueName: \"kubernetes.io/projected/fab74e98-0cf9-41ea-aebc-ce1cd5011740-kube-api-access-sh5c9\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-service-ca\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252098 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01c4a397-4485-49bc-9ee3-c794832fd1ee-serving-cert\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252120 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-serving-cert\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252138 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4f28\" (UniqueName: \"kubernetes.io/projected/9063c94b-5e44-4a4a-9c85-e122cf7751b9-kube-api-access-w4f28\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252161 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a62159-9584-4fd5-b9d2-e81d422f5089-config\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252193 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0e436fd-9344-4f55-ae35-4eae3aac24c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252211 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab74e98-0cf9-41ea-aebc-ce1cd5011740-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252234 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-audit-policies\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252255 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-config\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42a649e6-a13d-4a1d-94a6-82c03d5a913b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252356 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7lc7\" (UniqueName: \"kubernetes.io/projected/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-kube-api-access-f7lc7\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252550 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-trusted-ca-bundle\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252577 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/809c874c-661e-43c3-9e0e-6ee95ed8586e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.252641 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4q9l\" (UniqueName: \"kubernetes.io/projected/45069e17-f50a-47d5-9552-b32b9eecadce-kube-api-access-z4q9l\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.253787 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/207d3079-e7ed-46b9-8744-aed50bb42352-serving-cert\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.253831 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9cq\" (UniqueName: \"kubernetes.io/projected/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-kube-api-access-fw9cq\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254413 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254501 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223205f4-c6e1-4f77-bfc3-667ad541a34e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254556 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a649e6-a13d-4a1d-94a6-82c03d5a913b-config\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254581 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-audit-policies\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-config\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254668 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-default-certificate\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254694 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4c344657-7620-4366-80a9-84de8ed2face-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-69bcg\" (UID: \"4c344657-7620-4366-80a9-84de8ed2face\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254716 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab74e98-0cf9-41ea-aebc-ce1cd5011740-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254775 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-etcd-client\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254796 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lcs\" (UniqueName: \"kubernetes.io/projected/4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f-kube-api-access-r6lcs\") pod \"dns-operator-744455d44c-d9shf\" (UID: \"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254841 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254867 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtwlm\" (UniqueName: \"kubernetes.io/projected/207d3079-e7ed-46b9-8744-aed50bb42352-kube-api-access-rtwlm\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254887 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-encryption-config\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254909 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254947 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-config\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7e9f338-eb02-4618-aafb-37065b3823f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.254989 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49a62159-9584-4fd5-b9d2-e81d422f5089-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 
11:08:23.255007 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxq7\" (UniqueName: \"kubernetes.io/projected/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-kube-api-access-fdxq7\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255032 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0eb89f1c-1230-4455-86c1-6ad3796969a9-audit-dir\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255071 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-machine-approver-tls\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255091 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-metrics-certs\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255112 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-encryption-config\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255132 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9063c94b-5e44-4a4a-9c85-e122cf7751b9-serving-cert\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255166 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-service-ca-bundle\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255202 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/809c874c-661e-43c3-9e0e-6ee95ed8586e-images\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255252 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhqs\" (UniqueName: \"kubernetes.io/projected/01c4a397-4485-49bc-9ee3-c794832fd1ee-kube-api-access-lqhqs\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0e436fd-9344-4f55-ae35-4eae3aac24c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255298 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-config\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255320 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkks5\" (UniqueName: \"kubernetes.io/projected/7ffca8b8-930c-4a19-93ff-e47500546d2e-kube-api-access-lkks5\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255455 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0e436fd-9344-4f55-ae35-4eae3aac24c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255496 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255549 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255576 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-client-ca\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255606 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-serving-cert\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255626 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255645 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-audit\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255664 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9s4\" (UniqueName: \"kubernetes.io/projected/0eb89f1c-1230-4455-86c1-6ad3796969a9-kube-api-access-xd9s4\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gz9v\" (UniqueName: \"kubernetes.io/projected/7027d958-98c3-4fd1-9442-232be60e1eb7-kube-api-access-4gz9v\") pod \"downloads-7954f5f757-k5ktg\" (UID: \"7027d958-98c3-4fd1-9442-232be60e1eb7\") " pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7e9f338-eb02-4618-aafb-37065b3823f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255869 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255894 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-serving-cert\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.255921 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f-metrics-tls\") pod \"dns-operator-744455d44c-d9shf\" (UID: \"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.272658 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8v4r5"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.273746 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.275227 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.275316 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.293377 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kd6n"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.293684 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.294008 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.294361 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.294717 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.295092 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.295688 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.286426 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.286535 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.287021 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.287078 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 
11:08:23.298221 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.298601 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.298932 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.299543 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.299577 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.299930 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.299617 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.300363 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.306269 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.308003 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.308807 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.310034 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.311254 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.311853 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.327934 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535068-crjcm"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.331787 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.333436 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vxxfb"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.337856 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535068-crjcm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.351868 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376709 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376768 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5472b\" (UniqueName: \"kubernetes.io/projected/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-kube-api-access-5472b\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376811 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-service-ca\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376837 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376859 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-image-import-ca\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376881 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjtk\" (UniqueName: \"kubernetes.io/projected/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-kube-api-access-znjtk\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376905 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/223205f4-c6e1-4f77-bfc3-667ad541a34e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376927 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-oauth-config\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376951 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qnkw\" (UniqueName: \"kubernetes.io/projected/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-kube-api-access-8qnkw\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376972 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-oauth-serving-cert\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.376995 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-serving-cert\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377018 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377037 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377065 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xjh\" (UniqueName: \"kubernetes.io/projected/fe850ec7-4df8-4628-ae55-3c922de012e8-kube-api-access-j9xjh\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377089 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnj4w\" (UniqueName: \"kubernetes.io/projected/4c344657-7620-4366-80a9-84de8ed2face-kube-api-access-hnj4w\") pod \"cluster-samples-operator-665b6dd947-69bcg\" (UID: \"4c344657-7620-4366-80a9-84de8ed2face\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377135 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-config\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377157 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0eb89f1c-1230-4455-86c1-6ad3796969a9-node-pullsecrets\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377193 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-config\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377229 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377252 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqwr\" (UniqueName: \"kubernetes.io/projected/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-kube-api-access-2hqwr\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377306 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377330 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-auth-proxy-config\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 
11:08:23.377353 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377378 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4c8\" (UniqueName: \"kubernetes.io/projected/34fa2b5c-f1b3-434f-a307-be966f1d64d9-kube-api-access-bf4c8\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377403 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f469f47-990d-4224-8002-c658ef626f48-audit-dir\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377426 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377448 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-client\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42a649e6-a13d-4a1d-94a6-82c03d5a913b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377497 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff98r\" (UniqueName: \"kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377522 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ffca8b8-930c-4a19-93ff-e47500546d2e-serving-cert\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377552 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sh5c9\" (UniqueName: \"kubernetes.io/projected/fab74e98-0cf9-41ea-aebc-ce1cd5011740-kube-api-access-sh5c9\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377582 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-serving-cert\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377606 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4f28\" (UniqueName: \"kubernetes.io/projected/9063c94b-5e44-4a4a-9c85-e122cf7751b9-kube-api-access-w4f28\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab74e98-0cf9-41ea-aebc-ce1cd5011740-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377667 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26nd\" (UniqueName: \"kubernetes.io/projected/91b7ba35-3bf3-4738-8a71-d093b0e7fd12-kube-api-access-s26nd\") pod \"auto-csr-approver-29535068-crjcm\" (UID: \"91b7ba35-3bf3-4738-8a71-d093b0e7fd12\") " pod="openshift-infra/auto-csr-approver-29535068-crjcm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377691 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-config\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42a649e6-a13d-4a1d-94a6-82c03d5a913b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377742 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7lc7\" (UniqueName: \"kubernetes.io/projected/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-kube-api-access-f7lc7\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377766 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377792 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-profile-collector-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377819 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/207d3079-e7ed-46b9-8744-aed50bb42352-serving-cert\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377842 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223205f4-c6e1-4f77-bfc3-667ad541a34e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377865 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a649e6-a13d-4a1d-94a6-82c03d5a913b-config\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377886 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-audit-policies\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377936 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-etcd-client\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377960 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtwlm\" (UniqueName: 
\"kubernetes.io/projected/207d3079-e7ed-46b9-8744-aed50bb42352-kube-api-access-rtwlm\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.377984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjc9\" (UniqueName: \"kubernetes.io/projected/309c37fa-849e-460c-9816-4d67aa631021-kube-api-access-cdjc9\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378008 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49a62159-9584-4fd5-b9d2-e81d422f5089-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxq7\" (UniqueName: \"kubernetes.io/projected/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-kube-api-access-fdxq7\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378056 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/444082d7-63dc-4363-ad17-5b61e61895ed-srv-cert\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjlc\" (UniqueName: \"kubernetes.io/projected/444082d7-63dc-4363-ad17-5b61e61895ed-kube-api-access-fgjlc\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-machine-approver-tls\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378122 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-metrics-certs\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378142 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9063c94b-5e44-4a4a-9c85-e122cf7751b9-serving-cert\") pod \"console-operator-58897d9998-rrbmc\" 
(UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378165 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-service-ca-bundle\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378205 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-webhook-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378230 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0e436fd-9344-4f55-ae35-4eae3aac24c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-config\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378277 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a214909-86a1-4cbb-bccc-4f24faa05d4b-config\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378304 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrt7\" (UniqueName: \"kubernetes.io/projected/0981a4e3-56c8-49a4-a65f-94d3d916eef8-kube-api-access-djrt7\") pod \"migrator-59844c95c7-fsm2c\" (UID: \"0981a4e3-56c8-49a4-a65f-94d3d916eef8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378328 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378351 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0e436fd-9344-4f55-ae35-4eae3aac24c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378374 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378399 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8jd\" (UniqueName: \"kubernetes.io/projected/84a12f19-2563-48d0-8682-26dd701b62ce-kube-api-access-2g8jd\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378443 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34fa2b5c-f1b3-434f-a307-be966f1d64d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378470 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-serving-cert\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378489 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84a12f19-2563-48d0-8682-26dd701b62ce-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378507 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-srv-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378529 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: 
\"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378563 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gz9v\" (UniqueName: \"kubernetes.io/projected/7027d958-98c3-4fd1-9442-232be60e1eb7-kube-api-access-4gz9v\") pod \"downloads-7954f5f757-k5ktg\" (UID: \"7027d958-98c3-4fd1-9442-232be60e1eb7\") " pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378584 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378605 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-serving-cert\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378629 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f-metrics-tls\") pod \"dns-operator-744455d44c-d9shf\" (UID: \"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378653 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/809c874c-661e-43c3-9e0e-6ee95ed8586e-proxy-tls\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378676 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-config\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lvs9\" (UniqueName: \"kubernetes.io/projected/809c874c-661e-43c3-9e0e-6ee95ed8586e-kube-api-access-9lvs9\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378720 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jtbr\" (UniqueName: \"kubernetes.io/projected/e87b7bd7-9d39-48f0-b896-fe5da437416f-kube-api-access-2jtbr\") pod \"control-plane-machine-set-operator-78cbb6b69f-xw4vt\" (UID: \"e87b7bd7-9d39-48f0-b896-fe5da437416f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378761 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/444082d7-63dc-4363-ad17-5b61e61895ed-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378801 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378828 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378851 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84a12f19-2563-48d0-8682-26dd701b62ce-proxy-tls\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378873 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e9f338-eb02-4618-aafb-37065b3823f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378894 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49a62159-9584-4fd5-b9d2-e81d422f5089-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378916 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378938 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-stats-auth\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378960 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-client-ca\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.378982 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtrdk\" (UniqueName: \"kubernetes.io/projected/feac8cdb-eb8a-4f0d-afee-d18467d73727-kube-api-access-gtrdk\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379006 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-images\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/207d3079-e7ed-46b9-8744-aed50bb42352-available-featuregates\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379081 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-ca\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3546882-cc78-45d2-b99d-9d14605bdc5b-secret-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379131 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-serving-cert\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379157 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g44wp\" (UniqueName: \"kubernetes.io/projected/b0e436fd-9344-4f55-ae35-4eae3aac24c8-kube-api-access-g44wp\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: 
\"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379233 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feac8cdb-eb8a-4f0d-afee-d18467d73727-audit-dir\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379260 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379285 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379307 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7e9f338-eb02-4618-aafb-37065b3823f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379328 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7e9f338-eb02-4618-aafb-37065b3823f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379387 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379411 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhfjd\" (UniqueName: \"kubernetes.io/projected/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-kube-api-access-rhfjd\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:23 crc 
kubenswrapper[4724]: I0226 11:08:23.379435 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379460 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-service-ca\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379483 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbts\" (UniqueName: \"kubernetes.io/projected/f3546882-cc78-45d2-b99d-9d14605bdc5b-kube-api-access-8gbts\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379510 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45069e17-f50a-47d5-9552-b32b9eecadce-service-ca-bundle\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379536 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01c4a397-4485-49bc-9ee3-c794832fd1ee-serving-cert\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379561 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a62159-9584-4fd5-b9d2-e81d422f5089-config\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379612 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0e436fd-9344-4f55-ae35-4eae3aac24c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379639 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76vd\" (UniqueName: \"kubernetes.io/projected/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-kube-api-access-k76vd\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379663 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-audit-policies\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379688 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fe850ec7-4df8-4628-ae55-3c922de012e8-signing-key\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379726 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-trusted-ca-bundle\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379754 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/809c874c-661e-43c3-9e0e-6ee95ed8586e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379782 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4q9l\" (UniqueName: \"kubernetes.io/projected/45069e17-f50a-47d5-9552-b32b9eecadce-kube-api-access-z4q9l\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379806 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9cq\" (UniqueName: \"kubernetes.io/projected/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-kube-api-access-fw9cq\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e87b7bd7-9d39-48f0-b896-fe5da437416f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xw4vt\" (UID: \"e87b7bd7-9d39-48f0-b896-fe5da437416f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379863 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-config\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ngbn\" (UniqueName: 
\"kubernetes.io/projected/481dac61-2ecf-46c9-b8f8-981815ceb9c5-kube-api-access-8ngbn\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-config\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379941 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-default-certificate\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379962 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fe850ec7-4df8-4628-ae55-3c922de012e8-signing-cabundle\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.379987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4c344657-7620-4366-80a9-84de8ed2face-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-69bcg\" (UID: \"4c344657-7620-4366-80a9-84de8ed2face\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380009 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab74e98-0cf9-41ea-aebc-ce1cd5011740-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-encryption-config\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380061 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lcs\" (UniqueName: \"kubernetes.io/projected/4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f-kube-api-access-r6lcs\") pod \"dns-operator-744455d44c-d9shf\" (UID: \"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") 
" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380122 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380149 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-config\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380858 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q849z\" (UniqueName: \"kubernetes.io/projected/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-kube-api-access-q849z\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380921 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7e9f338-eb02-4618-aafb-37065b3823f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380953 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.380978 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0eb89f1c-1230-4455-86c1-6ad3796969a9-audit-dir\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381000 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576kc\" (UniqueName: \"kubernetes.io/projected/4a214909-86a1-4cbb-bccc-4f24faa05d4b-kube-api-access-576kc\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-encryption-config\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: 
I0226 11:08:23.381056 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381078 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/809c874c-661e-43c3-9e0e-6ee95ed8586e-images\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381115 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhqs\" (UniqueName: \"kubernetes.io/projected/01c4a397-4485-49bc-9ee3-c794832fd1ee-kube-api-access-lqhqs\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkks5\" (UniqueName: \"kubernetes.io/projected/7ffca8b8-930c-4a19-93ff-e47500546d2e-kube-api-access-lkks5\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381164 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a214909-86a1-4cbb-bccc-4f24faa05d4b-serving-cert\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381211 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-client-ca\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381273 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-tmpfs\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381300 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-audit\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9s4\" (UniqueName: 
\"kubernetes.io/projected/0eb89f1c-1230-4455-86c1-6ad3796969a9-kube-api-access-xd9s4\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381349 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7e9f338-eb02-4618-aafb-37065b3823f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381373 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-config\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381399 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-etcd-client\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.381447 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223205f4-c6e1-4f77-bfc3-667ad541a34e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.384557 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-service-ca\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.385206 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feac8cdb-eb8a-4f0d-afee-d18467d73727-audit-dir\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.385951 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.386278 
4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.386588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-config\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.387247 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7e9f338-eb02-4618-aafb-37065b3823f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.387520 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.387477 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-image-import-ca\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.387899 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.387984 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7e9f338-eb02-4618-aafb-37065b3823f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.389784 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-trusted-ca-bundle\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.390759 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/809c874c-661e-43c3-9e0e-6ee95ed8586e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.392140 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-config\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.399432 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-service-ca\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.400367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feac8cdb-eb8a-4f0d-afee-d18467d73727-audit-policies\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.401341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.406634 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.413519 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.413993 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.417438 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s92pk"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.417642 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.417870 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.417969 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wdxr7"] Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 
11:08:23.418059 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zhtn5"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.419244 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4rspm"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.420538 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nnjz2"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421171 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421312 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nj24t"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421393 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421469 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421551 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d9shf"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421631 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421709 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k5ktg"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421785 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.421869 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444602 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444643 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8v4r5"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444655 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mckmm"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444666 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444714 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2m27r"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444728 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9cwcb"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444748 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444746 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-service-ca-bundle\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.427942 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-serving-cert\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.430468 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.422904 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-audit-policies\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.433999 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.436751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-etcd-client\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.437084 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-oauth-config\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.444763 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mlrhs"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445479 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0e436fd-9344-4f55-ae35-4eae3aac24c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445525 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-ca\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445724 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445753 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rrbmc"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445766 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445780 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kd6n"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445795 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445807 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445820 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4rspm"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445832 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhtn5"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445843 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445855 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445865 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445876 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445886 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445896 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mlrhs"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445909 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.445995 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.423425 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01c4a397-4485-49bc-9ee3-c794832fd1ee-serving-cert\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.423683 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.446118 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-config\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.424425 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-client-ca\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.424958 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/207d3079-e7ed-46b9-8744-aed50bb42352-available-featuregates\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.426495 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4c344657-7620-4366-80a9-84de8ed2face-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-69bcg\" (UID: \"4c344657-7620-4366-80a9-84de8ed2face\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.427084 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.446336 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.430513 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.427461 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.447594 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-config\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.447602 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f-metrics-tls\") pod \"dns-operator-744455d44c-d9shf\" (UID: \"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9shf"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.430407 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.448424 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0eb89f1c-1230-4455-86c1-6ad3796969a9-node-pullsecrets\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.449022 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-config\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.433548 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.449484 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.449526 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0eb89f1c-1230-4455-86c1-6ad3796969a9-audit\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.433747 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.449973 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-oauth-serving-cert\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.450283 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.450377 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/207d3079-e7ed-46b9-8744-aed50bb42352-serving-cert\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.450503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7e9f338-eb02-4618-aafb-37065b3823f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.451205 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/809c874c-661e-43c3-9e0e-6ee95ed8586e-images\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.450519 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0eb89f1c-1230-4455-86c1-6ad3796969a9-audit-dir\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.450912 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0e436fd-9344-4f55-ae35-4eae3aac24c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.451353 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f469f47-990d-4224-8002-c658ef626f48-audit-dir\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.451440 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-auth-proxy-config\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.451914 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.452809 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.452808 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-etcd-client\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.453360 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-config\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.454113 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-config\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.457147 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-client-ca\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.458222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-config\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.467813 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e9f338-eb02-4618-aafb-37065b3823f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.480276 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.481450 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.481876 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.484073 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.484917 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-encryption-config\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.486275 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eb89f1c-1230-4455-86c1-6ad3796969a9-serving-cert\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.486654 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.486767 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.486828 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.487343 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9xjh\" (UniqueName: \"kubernetes.io/projected/fe850ec7-4df8-4628-ae55-3c922de012e8-kube-api-access-j9xjh\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.488701 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7zqt\" (UniqueName: \"kubernetes.io/projected/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-kube-api-access-d7zqt\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489196 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489294 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4c8\" (UniqueName: \"kubernetes.io/projected/34fa2b5c-f1b3-434f-a307-be966f1d64d9-kube-api-access-bf4c8\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489430 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-metrics-tls\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489528 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26nd\" (UniqueName: \"kubernetes.io/projected/91b7ba35-3bf3-4738-8a71-d093b0e7fd12-kube-api-access-s26nd\") pod \"auto-csr-approver-29535068-crjcm\" (UID: \"91b7ba35-3bf3-4738-8a71-d093b0e7fd12\") " pod="openshift-infra/auto-csr-approver-29535068-crjcm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489614 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489688 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-profile-collector-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489789 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-registration-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjc9\" (UniqueName: \"kubernetes.io/projected/309c37fa-849e-460c-9816-4d67aa631021-kube-api-access-cdjc9\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.489972 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/444082d7-63dc-4363-ad17-5b61e61895ed-srv-cert\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.490074 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgjlc\" (UniqueName: \"kubernetes.io/projected/444082d7-63dc-4363-ad17-5b61e61895ed-kube-api-access-fgjlc\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.490187 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-webhook-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.491494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a214909-86a1-4cbb-bccc-4f24faa05d4b-config\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.488165 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.491600 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djrt7\" (UniqueName: \"kubernetes.io/projected/0981a4e3-56c8-49a4-a65f-94d3d916eef8-kube-api-access-djrt7\") pod \"migrator-59844c95c7-fsm2c\" (UID: \"0981a4e3-56c8-49a4-a65f-94d3d916eef8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.491841 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84a12f19-2563-48d0-8682-26dd701b62ce-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.491922 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8jd\" (UniqueName: \"kubernetes.io/projected/84a12f19-2563-48d0-8682-26dd701b62ce-kube-api-access-2g8jd\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.491965 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.491994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34fa2b5c-f1b3-434f-a307-be966f1d64d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492049 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-srv-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492116 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-csi-data-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492303 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jtbr\" (UniqueName: \"kubernetes.io/projected/e87b7bd7-9d39-48f0-b896-fe5da437416f-kube-api-access-2jtbr\") pod \"control-plane-machine-set-operator-78cbb6b69f-xw4vt\" (UID: \"e87b7bd7-9d39-48f0-b896-fe5da437416f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492348 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/444082d7-63dc-4363-ad17-5b61e61895ed-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492380 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84a12f19-2563-48d0-8682-26dd701b62ce-proxy-tls\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492456 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-images\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3546882-cc78-45d2-b99d-9d14605bdc5b-secret-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492560 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492615 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhfjd\" (UniqueName: \"kubernetes.io/projected/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-kube-api-access-rhfjd\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492645 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492671 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-plugins-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492699 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9blh\" (UniqueName: \"kubernetes.io/projected/80040d88-3ec4-42f5-94f6-9c8afef81d73-kube-api-access-p9blh\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492747 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbts\" (UniqueName: \"kubernetes.io/projected/f3546882-cc78-45d2-b99d-9d14605bdc5b-kube-api-access-8gbts\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492807 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76vd\" (UniqueName: \"kubernetes.io/projected/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-kube-api-access-k76vd\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492837 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-node-bootstrap-token\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492875 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fe850ec7-4df8-4628-ae55-3c922de012e8-signing-key\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492943 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlwxx\" (UniqueName: \"kubernetes.io/projected/4c9eec4e-df3c-411b-8629-421f3abfb500-kube-api-access-zlwxx\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.492994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e87b7bd7-9d39-48f0-b896-fe5da437416f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xw4vt\" (UID: \"e87b7bd7-9d39-48f0-b896-fe5da437416f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493031 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-config\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493061 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ngbn\" (UniqueName: \"kubernetes.io/projected/481dac61-2ecf-46c9-b8f8-981815ceb9c5-kube-api-access-8ngbn\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fe850ec7-4df8-4628-ae55-3c922de012e8-signing-cabundle\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493169 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q849z\" (UniqueName: \"kubernetes.io/projected/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-kube-api-access-q849z\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493227 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493262 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-576kc\" (UniqueName: \"kubernetes.io/projected/4a214909-86a1-4cbb-bccc-4f24faa05d4b-kube-api-access-576kc\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493333 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a214909-86a1-4cbb-bccc-4f24faa05d4b-serving-cert\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493364 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-tmpfs\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493402 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-config-volume\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493501 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-socket-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493540 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-certs\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.493591 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-mountpoint-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.494471 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84a12f19-2563-48d0-8682-26dd701b62ce-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.496210 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-tmpfs\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.496251 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-serving-cert\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.495615 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-encryption-config\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.507400 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.508418 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.514257 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9063c94b-5e44-4a4a-9c85-e122cf7751b9-serving-cert\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.514928 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-serving-cert\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.514989 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-serving-cert\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.515397 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-machine-approver-tls\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.515476 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/feac8cdb-eb8a-4f0d-afee-d18467d73727-etcd-client\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.516728 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.517041 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535068-crjcm"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.521031 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.521668 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ffca8b8-930c-4a19-93ff-e47500546d2e-serving-cert\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.524243 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.524613 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.526246 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.526255 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.531163 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.546790 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.553309 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.567683 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.573300 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/809c874c-661e-43c3-9e0e-6ee95ed8586e-proxy-tls\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.587712 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.588786 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vxxfb"]
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601338 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-config-volume\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601476 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-socket-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601529 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-certs\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601567 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-mountpoint-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601609 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7zqt\" (UniqueName: \"kubernetes.io/projected/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-kube-api-access-d7zqt\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601774 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-metrics-tls\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.601890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-registration-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.602068 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-csi-data-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.602258 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-plugins-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.602293 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9blh\" (UniqueName: \"kubernetes.io/projected/80040d88-3ec4-42f5-94f6-9c8afef81d73-kube-api-access-p9blh\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.602384 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-node-bootstrap-token\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.602455 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlwxx\" (UniqueName: \"kubernetes.io/projected/4c9eec4e-df3c-411b-8629-421f3abfb500-kube-api-access-zlwxx\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.604058 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-registration-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.604284 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-mountpoint-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.604503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-plugins-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.604597 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-csi-data-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.608013 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.610375 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c9eec4e-df3c-411b-8629-421f3abfb500-socket-dir\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.622254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab74e98-0cf9-41ea-aebc-ce1cd5011740-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.627878 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.632918 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab74e98-0cf9-41ea-aebc-ce1cd5011740-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.645639 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.684343 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.705488 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.709060 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42a649e6-a13d-4a1d-94a6-82c03d5a913b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.724838 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.744565 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.765907 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.771127 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a649e6-a13d-4a1d-94a6-82c03d5a913b-config\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.784817 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.788915 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49a62159-9584-4fd5-b9d2-e81d422f5089-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.805495 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.814137 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a62159-9584-4fd5-b9d2-e81d422f5089-config\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.825313 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.844873 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.864799 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.884751 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.893038 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223205f4-c6e1-4f77-bfc3-667ad541a34e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.904614 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.914352 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223205f4-c6e1-4f77-bfc3-667ad541a34e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.924281 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.945840 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.964741 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.976694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-default-certificate\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.985543 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 26 11:08:23 crc kubenswrapper[4724]: I0226 11:08:23.999164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-stats-auth\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.005721 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.010858 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45069e17-f50a-47d5-9552-b32b9eecadce-metrics-certs\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.024321 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.030797 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45069e17-f50a-47d5-9552-b32b9eecadce-service-ca-bundle\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.044501 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.065471 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.066834 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-config\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.086020 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.098766 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName:
\"kubernetes.io/secret/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.105343 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.124665 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.134823 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-images\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.145901 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.165615 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.185281 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.205466 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.224972 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.240974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fe850ec7-4df8-4628-ae55-3c922de012e8-signing-key\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.245770 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.257263 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fe850ec7-4df8-4628-ae55-3c922de012e8-signing-cabundle\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.265290 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.285765 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.298840 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e87b7bd7-9d39-48f0-b896-fe5da437416f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-xw4vt\" (UID: \"e87b7bd7-9d39-48f0-b896-fe5da437416f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.302527 4724 request.go:700] Waited for 1.003459817s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.306434 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.325654 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.344896 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.349227 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/444082d7-63dc-4363-ad17-5b61e61895ed-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.349227 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3546882-cc78-45d2-b99d-9d14605bdc5b-secret-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.354613 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-profile-collector-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.365394 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.385107 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.397742 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.405983 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.443349 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" 
Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.445873 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.446807 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.451010 4724 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.451214 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca podName:9063c94b-5e44-4a4a-9c85-e122cf7751b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.951117 +0000 UTC m=+171.606856115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca") pod "console-operator-58897d9998-rrbmc" (UID: "9063c94b-5e44-4a4a-9c85-e122cf7751b9") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.455818 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/444082d7-63dc-4363-ad17-5b61e61895ed-srv-cert\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.465637 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.484722 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.490636 4724 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.490676 4724 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.490851 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-webhook-certs podName:f73a3c79-e83b-4cf2-9a39-ca27f3f3feab nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.990792294 +0000 UTC m=+171.646531409 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-webhook-certs") pod "multus-admission-controller-857f4d67dd-c8x4t" (UID: "f73a3c79-e83b-4cf2-9a39-ca27f3f3feab") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.490929 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume podName:f3546882-cc78-45d2-b99d-9d14605bdc5b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.990893247 +0000 UTC m=+171.646632562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume") pod "collect-profiles-29535060-x9rz4" (UID: "f3546882-cc78-45d2-b99d-9d14605bdc5b") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.491654 4724 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.491702 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-webhook-cert podName:6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.99169106 +0000 UTC m=+171.647430175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-webhook-cert") pod "packageserver-d55dfcdfc-zxggv" (UID: "6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.491736 4724 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.491788 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a214909-86a1-4cbb-bccc-4f24faa05d4b-config podName:4a214909-86a1-4cbb-bccc-4f24faa05d4b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.991774302 +0000 UTC m=+171.647513417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4a214909-86a1-4cbb-bccc-4f24faa05d4b-config") pod "service-ca-operator-777779d784-kkmrh" (UID: "4a214909-86a1-4cbb-bccc-4f24faa05d4b") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495469 4724 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495493 4724 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495566 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a214909-86a1-4cbb-bccc-4f24faa05d4b-serving-cert podName:4a214909-86a1-4cbb-bccc-4f24faa05d4b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:08:24.995534179 +0000 UTC m=+171.651273474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4a214909-86a1-4cbb-bccc-4f24faa05d4b-serving-cert") pod "service-ca-operator-777779d784-kkmrh" (UID: "4a214909-86a1-4cbb-bccc-4f24faa05d4b") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495582 4724 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495607 4724 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495639 4724 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495702 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34fa2b5c-f1b3-434f-a307-be966f1d64d9-package-server-manager-serving-cert podName:34fa2b5c-f1b3-434f-a307-be966f1d64d9 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.995680534 +0000 UTC m=+171.651419849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/34fa2b5c-f1b3-434f-a307-be966f1d64d9-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-jl45p" (UID: "34fa2b5c-f1b3-434f-a307-be966f1d64d9") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495881 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-srv-cert podName:309c37fa-849e-460c-9816-4d67aa631021 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.995860259 +0000 UTC m=+171.651599574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-srv-cert") pod "catalog-operator-68c6474976-mqt24" (UID: "309c37fa-849e-460c-9816-4d67aa631021") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495909 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a12f19-2563-48d0-8682-26dd701b62ce-proxy-tls podName:84a12f19-2563-48d0-8682-26dd701b62ce nodeName:}" failed. No retries permitted until 2026-02-26 11:08:24.99590087 +0000 UTC m=+171.651640195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/84a12f19-2563-48d0-8682-26dd701b62ce-proxy-tls") pod "machine-config-controller-84d6567774-s7c4t" (UID: "84a12f19-2563-48d0-8682-26dd701b62ce") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.495934 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-apiservice-cert podName:6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52 nodeName:}" failed. 
No retries permitted until 2026-02-26 11:08:24.995924911 +0000 UTC m=+171.651664236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-apiservice-cert") pod "packageserver-d55dfcdfc-zxggv" (UID: "6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.504888 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.524611 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.544428 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.565235 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.585408 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.603061 4724 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.603244 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-config-volume podName:dfd78b66-3464-48ce-9017-0fd1ff5e26f7 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:25.103221917 +0000 UTC m=+171.758961032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-config-volume") pod "dns-default-zhtn5" (UID: "dfd78b66-3464-48ce-9017-0fd1ff5e26f7") : failed to sync configmap cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.604397 4724 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.604452 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-certs podName:80040d88-3ec4-42f5-94f6-9c8afef81d73 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:25.104439752 +0000 UTC m=+171.760178867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-certs") pod "machine-config-server-nnjz2" (UID: "80040d88-3ec4-42f5-94f6-9c8afef81d73") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.604577 4724 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.604708 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-metrics-tls podName:dfd78b66-3464-48ce-9017-0fd1ff5e26f7 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:25.104678309 +0000 UTC m=+171.760417424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-metrics-tls") pod "dns-default-zhtn5" (UID: "dfd78b66-3464-48ce-9017-0fd1ff5e26f7") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.604752 4724 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: E0226 11:08:24.604782 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-node-bootstrap-token podName:80040d88-3ec4-42f5-94f6-9c8afef81d73 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:25.104773021 +0000 UTC m=+171.760512336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-node-bootstrap-token") pod "machine-config-server-nnjz2" (UID: "80040d88-3ec4-42f5-94f6-9c8afef81d73") : failed to sync secret cache: timed out waiting for the condition Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.605471 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.624766 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.645667 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.665048 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.685734 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.705477 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.725497 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.745311 4724 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.765325 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.785357 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.804918 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.826302 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.844728 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.865968 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.885817 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.904895 4724 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.946086 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5472b\" (UniqueName: \"kubernetes.io/projected/936d9d11-1063-4a6a-b7d6-68a1fe00d9dd-kube-api-access-5472b\") pod \"openshift-controller-manager-operator-756b6f6bc6-9kj2m\" (UID: \"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.947697 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.953458 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.968425 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" Feb 26 11:08:24 crc kubenswrapper[4724]: I0226 11:08:24.985058 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g44wp\" (UniqueName: \"kubernetes.io/projected/b0e436fd-9344-4f55-ae35-4eae3aac24c8-kube-api-access-g44wp\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: \"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.008493 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjtk\" (UniqueName: \"kubernetes.io/projected/74c6d322-04d4-4a3e-b3d7-fa6157c5a696-kube-api-access-znjtk\") pod \"authentication-operator-69f744f599-mckmm\" (UID: \"74c6d322-04d4-4a3e-b3d7-fa6157c5a696\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.023604 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/223205f4-c6e1-4f77-bfc3-667ad541a34e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6rwql\" (UID: \"223205f4-c6e1-4f77-bfc3-667ad541a34e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.044556 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4q9l\" (UniqueName: \"kubernetes.io/projected/45069e17-f50a-47d5-9552-b32b9eecadce-kube-api-access-z4q9l\") pod \"router-default-5444994796-h27ll\" (UID: \"45069e17-f50a-47d5-9552-b32b9eecadce\") " pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.055300 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-webhook-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.055362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a214909-86a1-4cbb-bccc-4f24faa05d4b-config\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:25 crc kubenswrapper[4724]: 
I0226 11:08:25.056172 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34fa2b5c-f1b3-434f-a307-be966f1d64d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056240 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-srv-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84a12f19-2563-48d0-8682-26dd701b62ce-proxy-tls\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056595 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a214909-86a1-4cbb-bccc-4f24faa05d4b-serving-cert\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056745 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056822 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a214909-86a1-4cbb-bccc-4f24faa05d4b-config\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.056894 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.057849 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.060121 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84a12f19-2563-48d0-8682-26dd701b62ce-proxy-tls\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.061349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34fa2b5c-f1b3-434f-a307-be966f1d64d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.061354 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-webhook-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.063998 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a214909-86a1-4cbb-bccc-4f24faa05d4b-serving-cert\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.064065 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.064636 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-apiservice-cert\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.065702 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9cq\" (UniqueName: \"kubernetes.io/projected/fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9-kube-api-access-fw9cq\") pod \"etcd-operator-b45778765-s92pk\" (UID: \"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.067332 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/309c37fa-849e-460c-9816-4d67aa631021-srv-cert\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.081536 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lvs9\" (UniqueName: \"kubernetes.io/projected/809c874c-661e-43c3-9e0e-6ee95ed8586e-kube-api-access-9lvs9\") pod \"machine-config-operator-74547568cd-thlzt\" (UID: \"809c874c-661e-43c3-9e0e-6ee95ed8586e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.104681 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49a62159-9584-4fd5-b9d2-e81d422f5089-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r668f\" (UID: \"49a62159-9584-4fd5-b9d2-e81d422f5089\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.122779 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.123887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtrdk\" (UniqueName: \"kubernetes.io/projected/feac8cdb-eb8a-4f0d-afee-d18467d73727-kube-api-access-gtrdk\") pod \"apiserver-7bbb656c7d-psfvt\" (UID: \"feac8cdb-eb8a-4f0d-afee-d18467d73727\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.134848 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.140773 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.144461 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.156940 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtwlm\" (UniqueName: \"kubernetes.io/projected/207d3079-e7ed-46b9-8744-aed50bb42352-kube-api-access-rtwlm\") pod \"openshift-config-operator-7777fb866f-md2vv\" (UID: \"207d3079-e7ed-46b9-8744-aed50bb42352\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.158219 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-node-bootstrap-token\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.158470 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-config-volume\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.158637 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-certs\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.158767 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-metrics-tls\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.190729 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxq7\" (UniqueName: \"kubernetes.io/projected/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-kube-api-access-fdxq7\") pod \"console-f9d7485db-9cwcb\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.212027 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gz9v\" (UniqueName: \"kubernetes.io/projected/7027d958-98c3-4fd1-9442-232be60e1eb7-kube-api-access-4gz9v\") pod \"downloads-7954f5f757-k5ktg\" (UID: \"7027d958-98c3-4fd1-9442-232be60e1eb7\") " pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.213393 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.248096 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.261708 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0e436fd-9344-4f55-ae35-4eae3aac24c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g9q8d\" (UID: 
\"b0e436fd-9344-4f55-ae35-4eae3aac24c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.265808 4724 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.274119 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.286283 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.301309 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-certs\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.304671 4724 request.go:700] Waited for 1.85802072s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.306901 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.308925 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.328848 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.329337 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.345058 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.345837 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m"] Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.348869 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.351931 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.361362 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.371797 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.387398 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.411341 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.437010 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80040d88-3ec4-42f5-94f6-9c8afef81d73-node-bootstrap-token\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.458033 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh5c9\" (UniqueName: \"kubernetes.io/projected/fab74e98-0cf9-41ea-aebc-ce1cd5011740-kube-api-access-sh5c9\") pod \"kube-storage-version-migrator-operator-b67b599dd-snwd7\" (UID: \"fab74e98-0cf9-41ea-aebc-ce1cd5011740\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.480496 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4f28\" (UniqueName: \"kubernetes.io/projected/9063c94b-5e44-4a4a-9c85-e122cf7751b9-kube-api-access-w4f28\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.516318 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42a649e6-a13d-4a1d-94a6-82c03d5a913b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hn56r\" (UID: \"42a649e6-a13d-4a1d-94a6-82c03d5a913b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.521252 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7lc7\" (UniqueName: \"kubernetes.io/projected/01a2fc1a-81a3-4607-926f-5e8ee502a3c9-kube-api-access-f7lc7\") pod \"ingress-operator-5b745b69d9-ncl99\" (UID: \"01a2fc1a-81a3-4607-926f-5e8ee502a3c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.543434 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7e9f338-eb02-4618-aafb-37065b3823f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zngnc\" (UID: \"b7e9f338-eb02-4618-aafb-37065b3823f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.559135 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9s4\" (UniqueName: \"kubernetes.io/projected/0eb89f1c-1230-4455-86c1-6ad3796969a9-kube-api-access-xd9s4\") pod \"apiserver-76f77b778f-wdxr7\" (UID: \"0eb89f1c-1230-4455-86c1-6ad3796969a9\") " pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.571219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qnkw\" (UniqueName: \"kubernetes.io/projected/630d11de-abc5-47ed-8284-7bbf4ec5b9c8-kube-api-access-8qnkw\") pod \"machine-approver-56656f9798-6tgnh\" (UID: \"630d11de-abc5-47ed-8284-7bbf4ec5b9c8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.588440 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqwr\" (UniqueName: \"kubernetes.io/projected/0bd269f2-74b2-4dfd-bec7-1442d4b438ef-kube-api-access-2hqwr\") pod \"openshift-apiserver-operator-796bbdcf4f-b7cfr\" (UID: \"0bd269f2-74b2-4dfd-bec7-1442d4b438ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.604663 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6lcs\" (UniqueName: \"kubernetes.io/projected/4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f-kube-api-access-r6lcs\") pod \"dns-operator-744455d44c-d9shf\" (UID: \"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9shf"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.621860 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.646298 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkks5\" (UniqueName: \"kubernetes.io/projected/7ffca8b8-930c-4a19-93ff-e47500546d2e-kube-api-access-lkks5\") pod \"route-controller-manager-6576b87f9c-mxbr7\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.648089 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.671689 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.672379 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.673110 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.688332 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.691772 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.692404 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.697665 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhqs\" (UniqueName: \"kubernetes.io/projected/01c4a397-4485-49bc-9ee3-c794832fd1ee-kube-api-access-lqhqs\") pod \"controller-manager-879f6c89f-nj24t\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.705965 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.710898 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.723579 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f"]
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.728156 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.731815 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h27ll" event={"ID":"45069e17-f50a-47d5-9552-b32b9eecadce","Type":"ContainerStarted","Data":"11a9df5e7a01415ef6e6cc08e503f419b4ffb3d829bdb70dbf649b334d25190c"}
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.745830 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.752609 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.756054 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" event={"ID":"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd","Type":"ContainerStarted","Data":"12a1130670930fa34788bccb49911a268c879ce27fb8a98bdd6b6cf82191ac64"}
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.767628 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-config-volume\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.771214 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.781806 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt"]
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.791981 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.805855 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.807082 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql"]
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.811075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-metrics-tls\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.835554 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9xjh\" (UniqueName: \"kubernetes.io/projected/fe850ec7-4df8-4628-ae55-3c922de012e8-kube-api-access-j9xjh\") pod \"service-ca-9c57cc56f-8v4r5\" (UID: \"fe850ec7-4df8-4628-ae55-3c922de012e8\") " pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.860970 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4c8\" (UniqueName: \"kubernetes.io/projected/34fa2b5c-f1b3-434f-a307-be966f1d64d9-kube-api-access-bf4c8\") pod \"package-server-manager-789f6589d5-jl45p\" (UID: \"34fa2b5c-f1b3-434f-a307-be966f1d64d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.866962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26nd\" (UniqueName: \"kubernetes.io/projected/91b7ba35-3bf3-4738-8a71-d093b0e7fd12-kube-api-access-s26nd\") pod \"auto-csr-approver-29535068-crjcm\" (UID: \"91b7ba35-3bf3-4738-8a71-d093b0e7fd12\") " pod="openshift-infra/auto-csr-approver-29535068-crjcm"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.886143 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjc9\" (UniqueName: \"kubernetes.io/projected/309c37fa-849e-460c-9816-4d67aa631021-kube-api-access-cdjc9\") pod \"catalog-operator-68c6474976-mqt24\" (UID: \"309c37fa-849e-460c-9816-4d67aa631021\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.887411 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d9shf"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.896718 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535068-crjcm"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.907189 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"
Feb 26 11:08:25 crc kubenswrapper[4724]: W0226 11:08:25.920376 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a62159_9584_4fd5_b9d2_e81d422f5089.slice/crio-ed2c7b607326d48103e870c664609837f4f5fb870214546684e23c5caa9f9767 WatchSource:0}: Error finding container ed2c7b607326d48103e870c664609837f4f5fb870214546684e23c5caa9f9767: Status 404 returned error can't find the container with id ed2c7b607326d48103e870c664609837f4f5fb870214546684e23c5caa9f9767
Feb 26 11:08:25 crc kubenswrapper[4724]: W0226 11:08:25.930664 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeac8cdb_eb8a_4f0d_afee_d18467d73727.slice/crio-c5dd4dc26bd908f505a3d6a679bde26d408ffd51a709d8774303113c3dbefdd6 WatchSource:0}: Error finding container c5dd4dc26bd908f505a3d6a679bde26d408ffd51a709d8774303113c3dbefdd6: Status 404 returned error can't find the container with id c5dd4dc26bd908f505a3d6a679bde26d408ffd51a709d8774303113c3dbefdd6
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.931786 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mckmm"]
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.939926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgjlc\" (UniqueName: \"kubernetes.io/projected/444082d7-63dc-4363-ad17-5b61e61895ed-kube-api-access-fgjlc\") pod \"olm-operator-6b444d44fb-nc5bx\" (UID: \"444082d7-63dc-4363-ad17-5b61e61895ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.947664 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djrt7\" (UniqueName: \"kubernetes.io/projected/0981a4e3-56c8-49a4-a65f-94d3d916eef8-kube-api-access-djrt7\") pod \"migrator-59844c95c7-fsm2c\" (UID: \"0981a4e3-56c8-49a4-a65f-94d3d916eef8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.954003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhfjd\" (UniqueName: \"kubernetes.io/projected/6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52-kube-api-access-rhfjd\") pod \"packageserver-d55dfcdfc-zxggv\" (UID: \"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:25 crc kubenswrapper[4724]: E0226 11:08:25.955219 4724 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 26 11:08:25 crc kubenswrapper[4724]: E0226 11:08:25.955309 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca podName:9063c94b-5e44-4a4a-9c85-e122cf7751b9 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:26.955282268 +0000 UTC m=+173.611021383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca") pod "console-operator-58897d9998-rrbmc" (UID: "9063c94b-5e44-4a4a-9c85-e122cf7751b9") : failed to sync configmap cache: timed out waiting for the condition
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.966955 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbts\" (UniqueName: \"kubernetes.io/projected/f3546882-cc78-45d2-b99d-9d14605bdc5b-kube-api-access-8gbts\") pod \"collect-profiles-29535060-x9rz4\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"
Feb 26 11:08:25 crc kubenswrapper[4724]: I0226 11:08:25.987675 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76vd\" (UniqueName: \"kubernetes.io/projected/6a9effc4-1c10-46ae-9762-1f3308aa9bc9-kube-api-access-k76vd\") pod \"machine-api-operator-5694c8668f-4f5jn\" (UID: \"6a9effc4-1c10-46ae-9762-1f3308aa9bc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.008318 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8jd\" (UniqueName: \"kubernetes.io/projected/84a12f19-2563-48d0-8682-26dd701b62ce-kube-api-access-2g8jd\") pod \"machine-config-controller-84d6567774-s7c4t\" (UID: \"84a12f19-2563-48d0-8682-26dd701b62ce\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.037644 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jtbr\" (UniqueName: \"kubernetes.io/projected/e87b7bd7-9d39-48f0-b896-fe5da437416f-kube-api-access-2jtbr\") pod \"control-plane-machine-set-operator-78cbb6b69f-xw4vt\" (UID: \"e87b7bd7-9d39-48f0-b896-fe5da437416f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.051593 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ngbn\" (UniqueName: \"kubernetes.io/projected/481dac61-2ecf-46c9-b8f8-981815ceb9c5-kube-api-access-8ngbn\") pod \"marketplace-operator-79b997595-8kd6n\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.053156 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.061247 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.071403 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.080701 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.091423 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-576kc\" (UniqueName: \"kubernetes.io/projected/4a214909-86a1-4cbb-bccc-4f24faa05d4b-kube-api-access-576kc\") pod \"service-ca-operator-777779d784-kkmrh\" (UID: \"4a214909-86a1-4cbb-bccc-4f24faa05d4b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.093697 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.106145 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q849z\" (UniqueName: \"kubernetes.io/projected/f73a3c79-e83b-4cf2-9a39-ca27f3f3feab-kube-api-access-q849z\") pod \"multus-admission-controller-857f4d67dd-c8x4t\" (UID: \"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.106445 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.111972 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlwxx\" (UniqueName: \"kubernetes.io/projected/4c9eec4e-df3c-411b-8629-421f3abfb500-kube-api-access-zlwxx\") pod \"csi-hostpathplugin-4rspm\" (UID: \"4c9eec4e-df3c-411b-8629-421f3abfb500\") " pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.116412 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.129771 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7zqt\" (UniqueName: \"kubernetes.io/projected/dfd78b66-3464-48ce-9017-0fd1ff5e26f7-kube-api-access-d7zqt\") pod \"dns-default-zhtn5\" (UID: \"dfd78b66-3464-48ce-9017-0fd1ff5e26f7\") " pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.130165 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.141135 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.156710 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.165967 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9blh\" (UniqueName: \"kubernetes.io/projected/80040d88-3ec4-42f5-94f6-9c8afef81d73-kube-api-access-p9blh\") pod \"machine-config-server-nnjz2\" (UID: \"80040d88-3ec4-42f5-94f6-9c8afef81d73\") " pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.166628 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.168693 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.175202 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.181781 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.184321 4724 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.184392 4724 projected.go:194] Error preparing data for projected volume kube-api-access-ff98r for pod openshift-authentication/oauth-openshift-558db77b4-2m27r: failed to sync configmap cache: timed out waiting for the condition
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.184515 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r podName:5f469f47-990d-4224-8002-c658ef626f48 nodeName:}" failed. No retries permitted until 2026-02-26 11:08:26.684482858 +0000 UTC m=+173.340221973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ff98r" (UniqueName: "kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r") pod "oauth-openshift-558db77b4-2m27r" (UID: "5f469f47-990d-4224-8002-c658ef626f48") : failed to sync configmap cache: timed out waiting for the condition
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.192735 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.204282 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnj4w\" (UniqueName: \"kubernetes.io/projected/4c344657-7620-4366-80a9-84de8ed2face-kube-api-access-hnj4w\") pod \"cluster-samples-operator-665b6dd947-69bcg\" (UID: \"4c344657-7620-4366-80a9-84de8ed2face\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.206836 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.215598 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4rspm"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.249652 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9cwcb"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.251566 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-trusted-ca\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.251662 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-tls\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.251704 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-bound-sa-token\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.251767 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4ws\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-kube-api-access-7x4ws\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.251879 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f276b5-977b-4a34-9c9c-2b699d10345c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.251987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.252030 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f276b5-977b-4a34-9c9c-2b699d10345c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.252064 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-certificates\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.254263 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nnjz2"
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.281963 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:26.781942814 +0000 UTC m=+173.437682129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.293711 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.359872 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.360323 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f276b5-977b-4a34-9c9c-2b699d10345c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.360524 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mxd\" (UniqueName: \"kubernetes.io/projected/2ee196d3-f421-4369-a590-691e0d0960b7-kube-api-access-v9mxd\") pod \"ingress-canary-mlrhs\" (UID: \"2ee196d3-f421-4369-a590-691e0d0960b7\") " pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.360679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f276b5-977b-4a34-9c9c-2b699d10345c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.360743 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-certificates\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.360766 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ee196d3-f421-4369-a590-691e0d0960b7-cert\") pod \"ingress-canary-mlrhs\" (UID: \"2ee196d3-f421-4369-a590-691e0d0960b7\") " pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.370443 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:26.870410412 +0000 UTC m=+173.526149517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.376555 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-trusted-ca\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.389343 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f276b5-977b-4a34-9c9c-2b699d10345c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.390063 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-trusted-ca\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.390255 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-tls\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.390372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-bound-sa-token\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.390600 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4ws\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-kube-api-access-7x4ws\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.391244 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-certificates\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.427929 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.439099 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s92pk"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.465052 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zfscs" podStartSLOduration=110.465022266 podStartE2EDuration="1m50.465022266s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.464021237 +0000 UTC m=+173.119760352" watchObservedRunningTime="2026-02-26 11:08:26.465022266 +0000 UTC m=+173.120761381"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.466632 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.474273 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f276b5-977b-4a34-9c9c-2b699d10345c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.484382 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4ws\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-kube-api-access-7x4ws\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.492554 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9mxd\" (UniqueName: \"kubernetes.io/projected/2ee196d3-f421-4369-a590-691e0d0960b7-kube-api-access-v9mxd\") pod \"ingress-canary-mlrhs\" (UID: \"2ee196d3-f421-4369-a590-691e0d0960b7\") " pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.492634 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.492666 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ee196d3-f421-4369-a590-691e0d0960b7-cert\") pod \"ingress-canary-mlrhs\" (UID: \"2ee196d3-f421-4369-a590-691e0d0960b7\") " pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.498028 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-bound-sa-token\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.502598 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-tls\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.508266 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.516633 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podStartSLOduration=110.51661238 podStartE2EDuration="1m50.51661238s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.5162558 +0000 UTC m=+173.171994935" watchObservedRunningTime="2026-02-26 11:08:26.51661238 +0000 UTC m=+173.172351495"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.517653 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"]
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.526633 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.026595426 +0000 UTC m=+173.682334541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.559155 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k5ktg"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.581168 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ee196d3-f421-4369-a590-691e0d0960b7-cert\") pod \"ingress-canary-mlrhs\" (UID: \"2ee196d3-f421-4369-a590-691e0d0960b7\") " pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.593109 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mxd\" (UniqueName: \"kubernetes.io/projected/2ee196d3-f421-4369-a590-691e0d0960b7-kube-api-access-v9mxd\") pod \"ingress-canary-mlrhs\" (UID: \"2ee196d3-f421-4369-a590-691e0d0960b7\") " pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.593864 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.594505 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.094481256 +0000 UTC m=+173.750220371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.627706 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ns2kr" podStartSLOduration=110.627678205 podStartE2EDuration="1m50.627678205s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.619551372 +0000 UTC m=+173.275290517" watchObservedRunningTime="2026-02-26 11:08:26.627678205 +0000 UTC m=+173.283417320"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.628108 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.630947 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.659837 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=19.659811703 podStartE2EDuration="19.659811703s" podCreationTimestamp="2026-02-26 11:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.659531995 +0000 UTC m=+173.315271110" watchObservedRunningTime="2026-02-26 11:08:26.659811703 +0000 UTC m=+173.315550818"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.696765 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.696824 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff98r\" (UniqueName: \"kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.698243 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.198217301 +0000 UTC m=+173.853956416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.722375 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.72234526 podStartE2EDuration="55.72234526s" podCreationTimestamp="2026-02-26 11:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.69608903 +0000 UTC m=+173.351828145" watchObservedRunningTime="2026-02-26 11:08:26.72234526 +0000 UTC m=+173.378084375"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.734003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff98r\" (UniqueName: \"kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r\") pod \"oauth-openshift-558db77b4-2m27r\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.797955 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.798617 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.298582039 +0000 UTC m=+173.954321154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: W0226 11:08:26.806913 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ffca8b8_930c_4a19_93ff_e47500546d2e.slice/crio-4a3fc71af70844cef626607981bf42fa47448fc1ca71db2f83b372351f119fde WatchSource:0}: Error finding container 4a3fc71af70844cef626607981bf42fa47448fc1ca71db2f83b372351f119fde: Status 404 returned error can't find the container with id 4a3fc71af70844cef626607981bf42fa47448fc1ca71db2f83b372351f119fde
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.808944 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" event={"ID":"feac8cdb-eb8a-4f0d-afee-d18467d73727","Type":"ContainerStarted","Data":"c5dd4dc26bd908f505a3d6a679bde26d408ffd51a709d8774303113c3dbefdd6"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.827050 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podStartSLOduration=110.827024352 podStartE2EDuration="1m50.827024352s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.82555397 +0000 UTC m=+173.481293085" watchObservedRunningTime="2026-02-26 11:08:26.827024352 +0000 UTC m=+173.482763467"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.828059 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" event={"ID":"936d9d11-1063-4a6a-b7d6-68a1fe00d9dd","Type":"ContainerStarted","Data":"ed5639e222ece6f3bf571f6b10d49357f5a77c4379fde0d64e48b414813b9789"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.838074 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" event={"ID":"b0e436fd-9344-4f55-ae35-4eae3aac24c8","Type":"ContainerStarted","Data":"e6cc2c655d600f839d2235229148c41eb5d383b5bb3328909eb92fcca015d248"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.841243 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mlrhs"
Feb 26 11:08:26 crc kubenswrapper[4724]: W0226 11:08:26.844269 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7027d958_98c3_4fd1_9442_232be60e1eb7.slice/crio-2c8e3d804b35c2bd2bf7749cde84e3543e934731222409878c0129413fa6ea8e WatchSource:0}: Error finding container 2c8e3d804b35c2bd2bf7749cde84e3543e934731222409878c0129413fa6ea8e: Status 404 returned error can't find the container with id 2c8e3d804b35c2bd2bf7749cde84e3543e934731222409878c0129413fa6ea8e
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.851759 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" event={"ID":"223205f4-c6e1-4f77-bfc3-667ad541a34e","Type":"ContainerStarted","Data":"5212372ac02e4781869cb92cde7858e1eada0653a1f045a14dca19763d33b2bc"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.890284 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7686" podStartSLOduration=110.890248779 podStartE2EDuration="1m50.890248779s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.86159837 +0000 UTC m=+173.517337485" watchObservedRunningTime="2026-02-26 11:08:26.890248779 +0000 UTC m=+173.545987884"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.890873 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" event={"ID":"630d11de-abc5-47ed-8284-7bbf4ec5b9c8","Type":"ContainerStarted","Data":"42a6549f29052769ec3aca846c4fc73f7f2e62e036e21152e1b74d0d32f3f68f"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.900829 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" event={"ID":"b7e9f338-eb02-4618-aafb-37065b3823f9","Type":"ContainerStarted","Data":"6d6081a988f4097b6d277817221314d5f32c7bd39bcfd9302403fe2ecc5ebd84"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.903382 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:26 crc kubenswrapper[4724]: E0226 11:08:26.911276 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.411248099 +0000 UTC m=+174.066987214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.921487 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" event={"ID":"74c6d322-04d4-4a3e-b3d7-fa6157c5a696","Type":"ContainerStarted","Data":"c966cb29e4c962a18c8b16e0d90be81b7314c0a1eb8fe539d603484aa9dd1fa5"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.928301 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.928378 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9cwcb" event={"ID":"0308748d-e26a-4fc4-bc5d-d3bd65936c7b","Type":"ContainerStarted","Data":"1af3954d1c49fe708ed8fb411484e13fafcfff104a198b77ca7b71ccb26dc59d"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.944053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h27ll" event={"ID":"45069e17-f50a-47d5-9552-b32b9eecadce","Type":"ContainerStarted","Data":"c406062b96decba9f53bd5d418c9f5b78fc1732cf5562b9d1c2a1410115443a0"}
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.961227 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.961197047 podStartE2EDuration="1m19.961197047s" podCreationTimestamp="2026-02-26 11:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.951616353 +0000 UTC m=+173.607355468" watchObservedRunningTime="2026-02-26 11:08:26.961197047 +0000 UTC m=+173.616936152"
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.964944 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r"]
Feb 26 11:08:26 crc kubenswrapper[4724]: I0226 11:08:26.983214 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" event={"ID":"49a62159-9584-4fd5-b9d2-e81d422f5089","Type":"ContainerStarted","Data":"ed2c7b607326d48103e870c664609837f4f5fb870214546684e23c5caa9f9767"}
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.004361 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.004734 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc"
Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.005493 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.505440891 +0000 UTC m=+174.161180006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.006399 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9063c94b-5e44-4a4a-9c85-e122cf7751b9-trusted-ca\") pod \"console-operator-58897d9998-rrbmc\" (UID: \"9063c94b-5e44-4a4a-9c85-e122cf7751b9\") " pod="openshift-console-operator/console-operator-58897d9998-rrbmc"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.007072 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.008090 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rrbmc"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.020526 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.020494581 podStartE2EDuration="13.020494581s" podCreationTimestamp="2026-02-26 11:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:26.986818559 +0000 UTC m=+173.642557694" watchObservedRunningTime="2026-02-26 11:08:27.020494581 +0000 UTC m=+173.676233696"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.020763 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wtm5h" podStartSLOduration=111.020758529 podStartE2EDuration="1m51.020758529s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:27.018230747 +0000 UTC m=+173.673969892" watchObservedRunningTime="2026-02-26 11:08:27.020758529 +0000 UTC m=+173.676497634"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.072076 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-49n4g" podStartSLOduration=111.072059535 podStartE2EDuration="1m51.072059535s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:27.037399204 +0000 UTC m=+173.693138329" watchObservedRunningTime="2026-02-26 11:08:27.072059535 +0000 UTC m=+173.727798650"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.096254 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nj24t"]
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.106726 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.107147 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.607129737 +0000 UTC m=+174.262868862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.148548 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"]
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.151450 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h27ll"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.154243 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.154289 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.180947 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d9shf"]
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.208343 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.208953 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.708927696 +0000 UTC m=+174.364666811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.209401 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wdxr7"]
Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.213810 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a649e6_a13d_4a1d_94a6_82c03d5a913b.slice/crio-ffecbf2c0deac6f144d0be1d1e65d2c7e327c2010c0a0cb9b3baa9fa69978530 WatchSource:0}: Error finding container ffecbf2c0deac6f144d0be1d1e65d2c7e327c2010c0a0cb9b3baa9fa69978530: Status 404 returned error can't find the container with id ffecbf2c0deac6f144d0be1d1e65d2c7e327c2010c0a0cb9b3baa9fa69978530
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.271931 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8v4r5"]
Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.300343 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c4a397_4485_49bc_9ee3_c794832fd1ee.slice/crio-0e96fec0a9db4a084f9c439edb219654a9d1d505b89e5b9011692aeaf6bf4d06 WatchSource:0}: Error finding container 0e96fec0a9db4a084f9c439edb219654a9d1d505b89e5b9011692aeaf6bf4d06: Status 404 returned error can't find the container with id 0e96fec0a9db4a084f9c439edb219654a9d1d505b89e5b9011692aeaf6bf4d06
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.310189 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.310742 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.810725265 +0000 UTC m=+174.466464380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.311259 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535068-crjcm"]
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.393856 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9kj2m" podStartSLOduration=111.39382872 podStartE2EDuration="1m51.39382872s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:27.390109644 +0000 UTC m=+174.045848759" watchObservedRunningTime="2026-02-26 11:08:27.39382872 +0000 UTC m=+174.049567835"
Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.411261 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.411687 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:27.9116704 +0000 UTC m=+174.567409505 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.436869 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d5a1aaf_95fa_4eaf_8f2f_68c48937af6f.slice/crio-eea6b012c1e38f83d2729490bcd9cfcd7f57a70709db6e86a7fab8ee3b0e853e WatchSource:0}: Error finding container eea6b012c1e38f83d2729490bcd9cfcd7f57a70709db6e86a7fab8ee3b0e853e: Status 404 returned error can't find the container with id eea6b012c1e38f83d2729490bcd9cfcd7f57a70709db6e86a7fab8ee3b0e853e Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.474977 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kd6n"] Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.513501 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.514303 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.014279492 +0000 UTC m=+174.670018607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.636424 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.637310 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.137280038 +0000 UTC m=+174.793019143 (durationBeforeRetry 500ms). 
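
The two errors repeating through this window share a single root cause: the kubelet's volume reconciler keeps requeueing MountVolume.MountDevice for the image-registry PVC and UnmountVolume.TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b because no node plugin named kubevirt.io.hostpath-provisioner has registered with this kubelet yet, and nestedpendingoperations enforces the 500ms durationBeforeRetry between attempts (the m=+174... values are offsets on the kubelet's monotonic clock). Until the plugin pod (csi-hostpathplugin-4rspm, pushed to this node at 11:08:29 further down) comes up and registers, every attempt fails the same "not found in the list of registered CSI drivers" check. A minimal sketch of how to confirm that state from outside, assuming kubectl access to this cluster and a shell on the node (both assumptions, not shown in the log):

```python
# Sketch: check whether kubevirt.io.hostpath-provisioner is visible to the API
# server and registered with this node's kubelet. The registry path below is
# the standard kubelet convention, assumed rather than taken from this log.
import os
import subprocess

# Cluster-wide view: CSIDriver objects known to the API server.
subprocess.run(["kubectl", "get", "csidriver"], check=False)

# Node view: the kubelet discovers plugins via registration sockets in this
# directory; an entry like kubevirt.io.hostpath-provisioner-reg.sock should
# appear once the csi-hostpathplugin pod is running.
reg_dir = "/var/lib/kubelet/plugins_registry"
if os.path.isdir(reg_dir):
    print(os.listdir(reg_dir))
```
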
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.654312 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91b7ba35_3bf3_4738_8a71_d093b0e7fd12.slice/crio-5b0aa192d079c4e570ac51cd56221f4e74ef0374e350098f7ea3001c3f8001ad WatchSource:0}: Error finding container 5b0aa192d079c4e570ac51cd56221f4e74ef0374e350098f7ea3001c3f8001ad: Status 404 returned error can't find the container with id 5b0aa192d079c4e570ac51cd56221f4e74ef0374e350098f7ea3001c3f8001ad Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.708690 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.723396 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe850ec7_4df8_4628_ae55_3c922de012e8.slice/crio-dde1a82661e23b6aaf4263bb5c9319358909ca207f800d389811034503a4b170 WatchSource:0}: Error finding container dde1a82661e23b6aaf4263bb5c9319358909ca207f800d389811034503a4b170: Status 404 returned error can't find the container with id dde1a82661e23b6aaf4263bb5c9319358909ca207f800d389811034503a4b170 Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.740400 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.740881 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.240863308 +0000 UTC m=+174.896602423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.765757 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481dac61_2ecf_46c9_b8f8_981815ceb9c5.slice/crio-c9416bdb5c0137356e8452bd208c2ce63e71a9b96bff1bb953c0a0194faa4c48 WatchSource:0}: Error finding container c9416bdb5c0137356e8452bd208c2ce63e71a9b96bff1bb953c0a0194faa4c48: Status 404 returned error can't find the container with id c9416bdb5c0137356e8452bd208c2ce63e71a9b96bff1bb953c0a0194faa4c48 Feb 26 11:08:27 crc kubenswrapper[4724]: W0226 11:08:27.782603 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80040d88_3ec4_42f5_94f6_9c8afef81d73.slice/crio-4f58109a5502fb2206949e209e8e4839d9e9a4cf5d1c1f81adbfcf040a05a1cd WatchSource:0}: Error finding container 4f58109a5502fb2206949e209e8e4839d9e9a4cf5d1c1f81adbfcf040a05a1cd: Status 404 returned error can't find the container with id 4f58109a5502fb2206949e209e8e4839d9e9a4cf5d1c1f81adbfcf040a05a1cd Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.849627 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt"] Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.849992 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.850159 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.350133061 +0000 UTC m=+175.005872176 (durationBeforeRetry 500ms). 
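
Interleaved with the CSI retries are W-level "Failed to process watch event ... Status 404" messages. These come from cAdvisor noticing a freshly created crio-&lt;id&gt; cgroup before CRI-O can answer a lookup for that container id; during a mass pod start like this they are transient noise rather than failures. A small, assumption-laden helper for separating the recurring classes in an excerpt like this one (the filename is illustrative, and the regexes match only the message shapes visible above, not kubelet logs in general):

```python
# Tally the transient error classes in a saved copy of this journal excerpt.
import re
from collections import Counter

patterns = {
    "csi_driver_not_registered": re.compile(r"not found in the list of registered CSI drivers"),
    "cadvisor_watch_404": re.compile(r"Failed to process watch event .* Status 404"),
    "startup_probe_failure": re.compile(r'probeType="Startup".*probeResult="failure"'),
}

counts = Counter()
with open("kubelet.log", encoding="utf-8") as f:  # hypothetical saved excerpt
    for line in f:
        for name, pat in patterns.items():
            if pat.search(line):
                counts[name] += 1
print(counts)
```
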
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.850611 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.851260 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.351248723 +0000 UTC m=+175.006987848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.905822 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4f5jn"] Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.955979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.956093 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.456068559 +0000 UTC m=+175.111807674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:27 crc kubenswrapper[4724]: I0226 11:08:27.956464 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:27 crc kubenswrapper[4724]: E0226 11:08:27.956920 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.456906832 +0000 UTC m=+175.112645957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.034929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nnjz2" event={"ID":"80040d88-3ec4-42f5-94f6-9c8afef81d73","Type":"ContainerStarted","Data":"4f58109a5502fb2206949e209e8e4839d9e9a4cf5d1c1f81adbfcf040a05a1cd"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.051632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" event={"ID":"0bd269f2-74b2-4dfd-bec7-1442d4b438ef","Type":"ContainerStarted","Data":"58f70b6398afebde8c495abd35ad3a25bc06b0538c2265c0d87cfbe588854e23"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.056928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" event={"ID":"7ffca8b8-930c-4a19-93ff-e47500546d2e","Type":"ContainerStarted","Data":"4a3fc71af70844cef626607981bf42fa47448fc1ca71db2f83b372351f119fde"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.058486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" event={"ID":"fe850ec7-4df8-4628-ae55-3c922de012e8","Type":"ContainerStarted","Data":"dde1a82661e23b6aaf4263bb5c9319358909ca207f800d389811034503a4b170"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.058578 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.059028 4724 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.558887687 +0000 UTC m=+175.214626802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.059099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.059692 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.55968501 +0000 UTC m=+175.215424125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.060049 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" event={"ID":"49a62159-9584-4fd5-b9d2-e81d422f5089","Type":"ContainerStarted","Data":"6309ea9559506b9cba75bb18722e5b0571c2a3508ed72a5a93ee89faee8246b8"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.071938 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" event={"ID":"34fa2b5c-f1b3-434f-a307-be966f1d64d9","Type":"ContainerStarted","Data":"d1190ce13eddee63e438bad90558dd5fb5883cdcff37b3018b490daf807757e5"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.074603 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535068-crjcm" event={"ID":"91b7ba35-3bf3-4738-8a71-d093b0e7fd12","Type":"ContainerStarted","Data":"5b0aa192d079c4e570ac51cd56221f4e74ef0374e350098f7ea3001c3f8001ad"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.080337 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" event={"ID":"b7e9f338-eb02-4618-aafb-37065b3823f9","Type":"ContainerStarted","Data":"9e5166e4e789dda37ce360e361f6ef8d555705dcbfa7d5088f54bc5780bbb733"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.082539 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerStarted","Data":"c9416bdb5c0137356e8452bd208c2ce63e71a9b96bff1bb953c0a0194faa4c48"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.123086 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" event={"ID":"fab74e98-0cf9-41ea-aebc-ce1cd5011740","Type":"ContainerStarted","Data":"6cd6184e9a1d9c8afea11aecdb1b7738d2e3457a5a7094292b0c8f0c9d031d92"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.136523 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" event={"ID":"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9","Type":"ContainerStarted","Data":"18e592543d0d0cdd4dcc6a9c1d560718d73bcf5f821936ed81853d33e91fdb57"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.155308 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" event={"ID":"01c4a397-4485-49bc-9ee3-c794832fd1ee","Type":"ContainerStarted","Data":"0e96fec0a9db4a084f9c439edb219654a9d1d505b89e5b9011692aeaf6bf4d06"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.160337 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.161302 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.661273113 +0000 UTC m=+175.317012238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.166404 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" event={"ID":"01a2fc1a-81a3-4607-926f-5e8ee502a3c9","Type":"ContainerStarted","Data":"0ee8f981efd034e7a4a8f2ca070c71ecad544af57bae3232a793900c0825028b"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.175067 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" event={"ID":"207d3079-e7ed-46b9-8744-aed50bb42352","Type":"ContainerStarted","Data":"4c5a8c8f436ae0e8641214f015df7179cfe1741f5c6ef3cb8b7239ee0e580ef0"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.177136 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" event={"ID":"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f","Type":"ContainerStarted","Data":"eea6b012c1e38f83d2729490bcd9cfcd7f57a70709db6e86a7fab8ee3b0e853e"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.180151 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" event={"ID":"42a649e6-a13d-4a1d-94a6-82c03d5a913b","Type":"ContainerStarted","Data":"ffecbf2c0deac6f144d0be1d1e65d2c7e327c2010c0a0cb9b3baa9fa69978530"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.181678 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" event={"ID":"0eb89f1c-1230-4455-86c1-6ad3796969a9","Type":"ContainerStarted","Data":"724386524dd032fa457ccd3c9dc49819bb4860e92f71f0df40e1bd8335cb7c8d"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.183204 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k5ktg" event={"ID":"7027d958-98c3-4fd1-9442-232be60e1eb7","Type":"ContainerStarted","Data":"2c8e3d804b35c2bd2bf7749cde84e3543e934731222409878c0129413fa6ea8e"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.191725 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" event={"ID":"809c874c-661e-43c3-9e0e-6ee95ed8586e","Type":"ContainerStarted","Data":"a921f9e186482348257046a76f97a26b02d59b03386453fbb55ea5523db83eea"} Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.239390 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:28 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:28 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:28 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.239883 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" 
podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.262841 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.263322 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.763304649 +0000 UTC m=+175.419043764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.282563 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.366026 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.366513 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.866491838 +0000 UTC m=+175.522230953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.426282 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.491763 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.492196 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:28.99216412 +0000 UTC m=+175.647903235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.498429 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h27ll" podStartSLOduration=112.498404018 podStartE2EDuration="1m52.498404018s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:28.462623716 +0000 UTC m=+175.118362841" watchObservedRunningTime="2026-02-26 11:08:28.498404018 +0000 UTC m=+175.154143133" Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.593853 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.594672 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.094642639 +0000 UTC m=+175.750381754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.623913 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8x4t"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.686128 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.698549 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.699041 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.199025342 +0000 UTC m=+175.854764457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.755629 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.763122 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.803364 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.803741 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.303724094 +0000 UTC m=+175.959463209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.810069 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.885299 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c"] Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.889774 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zngnc" podStartSLOduration=112.889745393 podStartE2EDuration="1m52.889745393s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:28.873508619 +0000 UTC m=+175.529247734" watchObservedRunningTime="2026-02-26 11:08:28.889745393 +0000 UTC m=+175.545484508" Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.905275 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:28 crc kubenswrapper[4724]: E0226 11:08:28.915649 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.415619552 +0000 UTC m=+176.071358667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:28 crc kubenswrapper[4724]: I0226 11:08:28.927594 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r668f" podStartSLOduration=112.927566884 podStartE2EDuration="1m52.927566884s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:28.913907633 +0000 UTC m=+175.569646748" watchObservedRunningTime="2026-02-26 11:08:28.927566884 +0000 UTC m=+175.583305999" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.009927 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.010623 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.510596507 +0000 UTC m=+176.166335622 (durationBeforeRetry 500ms). 
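
The pod_startup_latency_tracker entries are internally consistent: in each one, podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp exactly (for kube-apiserver-operator-766d6c64bb-r668f above, 11:08:28.927566884 minus 11:06:36 = 112.927566884s, reported as "1m52.927566884s"), and the all-zero firstStartedPulling/lastFinishedPulling values (0001-01-01) indicate no image pull contributed to the latency. A quick recomputation, with the timestamps copied from that entry and truncated to microseconds for Python's datetime:

```python
# Recompute podStartSLOduration for kube-apiserver-operator-766d6c64bb-r668f.
from datetime import datetime, timezone

created = datetime(2026, 2, 26, 11, 6, 36, tzinfo=timezone.utc)            # podCreationTimestamp
observed = datetime(2026, 2, 26, 11, 8, 28, 927566, tzinfo=timezone.utc)   # watchObservedRunningTime
print((observed - created).total_seconds())  # 112.927566, matching podStartSLOduration
```
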
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: W0226 11:08:29.058850 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f17bea2_a6c9_4d5b_a61e_95ebacfbaf52.slice/crio-1f052cc62e03db72ec8e9e2dccd30b21a38be9cd7d995b10e69ee59d87ad3c0b WatchSource:0}: Error finding container 1f052cc62e03db72ec8e9e2dccd30b21a38be9cd7d995b10e69ee59d87ad3c0b: Status 404 returned error can't find the container with id 1f052cc62e03db72ec8e9e2dccd30b21a38be9cd7d995b10e69ee59d87ad3c0b Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.091747 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mlrhs"] Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.113671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.116808 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.616776531 +0000 UTC m=+176.272515636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.153493 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rrbmc"] Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.164813 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4rspm"] Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.174049 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:29 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:29 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:29 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.174445 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.225838 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.226700 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.726670532 +0000 UTC m=+176.382409657 (durationBeforeRetry 500ms). 
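
The "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-4rspm just above is the expected resolution path for the retry loop: once that plugin pod runs and registers the driver with the kubelet (typically via its node-driver-registrar sidecar), the pending mount for image-registry-697d97f7c8-vxxfb and the teardown for 8f668bae-612b-4b75-9490-919e737c6a3b should succeed on a subsequent 500ms retry. A sketch of the follow-up check this implies, assuming kubectl access (pod and namespace names are taken from the log entry; the node name "crc" is from the log prefix):

```python
# Wait for the CSI node plugin pod, then confirm the driver is registered on
# the node; after this, the kubelet's 500ms retries above should stop failing.
import subprocess

subprocess.run(["kubectl", "-n", "hostpath-provisioner", "wait",
                "--for=condition=Ready", "pod/csi-hostpathplugin-4rspm",
                "--timeout=120s"], check=False)
subprocess.run(["kubectl", "get", "csinode", "crc",
                "-o", "jsonpath={.spec.drivers[*].name}"], check=False)
```
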
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.272026 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhtn5"] Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.290932 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg"] Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.324879 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" event={"ID":"fa9aa66a-443b-4cd6-99fb-fbf3b841d8c9","Type":"ContainerStarted","Data":"2576b3495e07b516856812b52fff88e8dc9f3b45c816ae653a05b69d242b797e"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.328579 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.329358 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.829339056 +0000 UTC m=+176.485078171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: W0226 11:08:29.340401 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee196d3_f421_4369_a590_691e0d0960b7.slice/crio-99ab305da6a909a632234adc5d2c70ec20ea0d1f61e8cadfb541dbda56a844a5 WatchSource:0}: Error finding container 99ab305da6a909a632234adc5d2c70ec20ea0d1f61e8cadfb541dbda56a844a5: Status 404 returned error can't find the container with id 99ab305da6a909a632234adc5d2c70ec20ea0d1f61e8cadfb541dbda56a844a5 Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.345258 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" event={"ID":"309c37fa-849e-460c-9816-4d67aa631021","Type":"ContainerStarted","Data":"0fdd1e1d4c2d9f4ae984a31e70546e7b3b0150280cc71586129b6646b4018636"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.349032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" event={"ID":"e87b7bd7-9d39-48f0-b896-fe5da437416f","Type":"ContainerStarted","Data":"3a68aaddff7d9a8dd3e82ef608cff6d4790e19d8cbabef6e7788fe792bec237c"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.378638 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" event={"ID":"630d11de-abc5-47ed-8284-7bbf4ec5b9c8","Type":"ContainerStarted","Data":"2758c17e69032f96138456ddef30e7b0cc85f89443d6024b96ae380342432c63"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.400943 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" event={"ID":"34fa2b5c-f1b3-434f-a307-be966f1d64d9","Type":"ContainerStarted","Data":"e62b1f47ef0334700674636235f26e5fdbacb2ebda34d2015ad9a9091d0245a5"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.431391 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.432053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" event={"ID":"809c874c-661e-43c3-9e0e-6ee95ed8586e","Type":"ContainerStarted","Data":"8d2ff8d6c23db881e33fa18dc0e4c96836041a6715e2955210909176259f1f10"} Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.433351 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:29.933311167 +0000 UTC m=+176.589050292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.449287 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2m27r"] Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.463389 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9cwcb" event={"ID":"0308748d-e26a-4fc4-bc5d-d3bd65936c7b","Type":"ContainerStarted","Data":"0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.469372 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" event={"ID":"84a12f19-2563-48d0-8682-26dd701b62ce","Type":"ContainerStarted","Data":"c57b08d7ff60dc1349f157c36b0173882469926515224547f169c487edaac9ae"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.471479 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-s92pk" podStartSLOduration=113.471435817 podStartE2EDuration="1m53.471435817s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:29.463098469 +0000 UTC m=+176.118837594" watchObservedRunningTime="2026-02-26 11:08:29.471435817 +0000 UTC m=+176.127174932" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.482422 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" event={"ID":"444082d7-63dc-4363-ad17-5b61e61895ed","Type":"ContainerStarted","Data":"e2ebd93146fea0afe78b01c35ceee8a100d96f966dc706892b0dbd250c9a18ed"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.502382 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" event={"ID":"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab","Type":"ContainerStarted","Data":"0c16cccd936866072134094c134f9ad4e9c2a9219521490175c214e7c42aa4ef"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.509468 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9cwcb" podStartSLOduration=113.509433033 podStartE2EDuration="1m53.509433033s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:29.501586039 +0000 UTC m=+176.157325164" watchObservedRunningTime="2026-02-26 11:08:29.509433033 +0000 UTC m=+176.165172148" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.515828 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" event={"ID":"6a9effc4-1c10-46ae-9762-1f3308aa9bc9","Type":"ContainerStarted","Data":"5b3f9ddfac0d0e78b3e09fbba21642ab0c3883083559681d9bdf277b0b167fa7"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.521803 4724 
generic.go:334] "Generic (PLEG): container finished" podID="feac8cdb-eb8a-4f0d-afee-d18467d73727" containerID="6e79558617fc57f654015e20243d96f3b6e5816e2265cc82aa672e8a6539430e" exitCode=0 Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.521902 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" event={"ID":"feac8cdb-eb8a-4f0d-afee-d18467d73727","Type":"ContainerDied","Data":"6e79558617fc57f654015e20243d96f3b6e5816e2265cc82aa672e8a6539430e"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.533407 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.535478 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.035455327 +0000 UTC m=+176.691194442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.569086 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" event={"ID":"b0e436fd-9344-4f55-ae35-4eae3aac24c8","Type":"ContainerStarted","Data":"ae6cfdbb6fe6f539514cec959d49d2aea8e57413c71ac86dd621ef18a736c4c4"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.606093 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" event={"ID":"4a214909-86a1-4cbb-bccc-4f24faa05d4b","Type":"ContainerStarted","Data":"f3ed2286d4d4f745a757f963d2fcfe7a677f88b79a53221e0ee84a60892938f4"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.609083 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g9q8d" podStartSLOduration=113.60903795 podStartE2EDuration="1m53.60903795s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:29.604411487 +0000 UTC m=+176.260150612" watchObservedRunningTime="2026-02-26 11:08:29.60903795 +0000 UTC m=+176.264777075" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.624089 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" event={"ID":"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52","Type":"ContainerStarted","Data":"1f052cc62e03db72ec8e9e2dccd30b21a38be9cd7d995b10e69ee59d87ad3c0b"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.626271 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c" event={"ID":"0981a4e3-56c8-49a4-a65f-94d3d916eef8","Type":"ContainerStarted","Data":"5263b2ac2272e98568c9a29845a15236e27533eb1910a9dd4ad36d5861dec002"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.629895 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" event={"ID":"223205f4-c6e1-4f77-bfc3-667ad541a34e","Type":"ContainerStarted","Data":"d13f4955952b9301e06fe03427dfeb5e62b7a7c354f6067b8f624423a4930045"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.632834 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" event={"ID":"0bd269f2-74b2-4dfd-bec7-1442d4b438ef","Type":"ContainerStarted","Data":"b3880e1100009ec69352396f8e7b4984d3b792509f2e340d36ec11a66c7a91d5"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.634577 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.634750 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.134719504 +0000 UTC m=+176.790458619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.635354 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.638337 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" event={"ID":"f3546882-cc78-45d2-b99d-9d14605bdc5b","Type":"ContainerStarted","Data":"8886c329888bd4aa2a5199bd788bd2859ccdf6061179b6a771aaa3cb6d45c3f1"} Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.638876 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.138849062 +0000 UTC m=+176.794588177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.640824 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" event={"ID":"74c6d322-04d4-4a3e-b3d7-fa6157c5a696","Type":"ContainerStarted","Data":"ad45411aefb4193182c012cbfc417bf6d69691a1afed5fcc9a9d1d179d68bcb5"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.650795 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" event={"ID":"fab74e98-0cf9-41ea-aebc-ce1cd5011740","Type":"ContainerStarted","Data":"d48a490be9c5823bf62a38106d15aaf6c194417d0177efded98b74cf1a6f4e83"} Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.684165 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6rwql" podStartSLOduration=113.684137356 podStartE2EDuration="1m53.684137356s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:29.656493136 +0000 UTC m=+176.312232261" watchObservedRunningTime="2026-02-26 11:08:29.684137356 +0000 UTC m=+176.339876491" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.719015 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" podStartSLOduration=113.718984332 podStartE2EDuration="1m53.718984332s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:29.684718093 +0000 UTC m=+176.340457208" watchObservedRunningTime="2026-02-26 11:08:29.718984332 +0000 UTC m=+176.374723467" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.730896 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-snwd7" podStartSLOduration=113.730874912 podStartE2EDuration="1m53.730874912s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:29.718280552 +0000 UTC m=+176.374019667" watchObservedRunningTime="2026-02-26 11:08:29.730874912 +0000 UTC m=+176.386614027" Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.737399 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.737594 4724 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.237573453 +0000 UTC m=+176.893312568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.737861 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.739117 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.239105137 +0000 UTC m=+176.894844252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.839312 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.839458 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.339439364 +0000 UTC m=+176.995178479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.839762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.840132 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.340121244 +0000 UTC m=+176.995860359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:29 crc kubenswrapper[4724]: I0226 11:08:29.941116 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:29 crc kubenswrapper[4724]: E0226 11:08:29.941549 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.441530692 +0000 UTC m=+177.097269807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.043173 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.043983 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.54396491 +0000 UTC m=+177.199704025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.144861 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.145452 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.645421809 +0000 UTC m=+177.301160934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.151316 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:30 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:30 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:30 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.151404 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.251322 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.252543 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.75251457 +0000 UTC m=+177.408253685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.355216 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.355768 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.8557371 +0000 UTC m=+177.511476215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.469934 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.470390 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:30.970375596 +0000 UTC m=+177.626114711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.577204 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.577792 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.077767625 +0000 UTC m=+177.733506740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.680644 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.681606 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.181582843 +0000 UTC m=+177.837321958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.786221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.786843 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.286812839 +0000 UTC m=+177.942551954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.888654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.889286 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.389260777 +0000 UTC m=+178.044999892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.948535 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" event={"ID":"7ffca8b8-930c-4a19-93ff-e47500546d2e","Type":"ContainerStarted","Data":"c97f6a80673402dc556d8a667efc01d48311935195a0443f3472e2167a1a0f4c"} Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.949934 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.960669 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mxbr7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.960799 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.988083 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b7cfr" podStartSLOduration=114.98805414 podStartE2EDuration="1m54.98805414s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 11:08:29.750713459 +0000 UTC m=+176.406452574" watchObservedRunningTime="2026-02-26 11:08:30.98805414 +0000 UTC m=+177.643793255" Feb 26 11:08:30 crc kubenswrapper[4724]: I0226 11:08:30.994780 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:30 crc kubenswrapper[4724]: E0226 11:08:30.997210 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.49715675 +0000 UTC m=+178.152895865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.057253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" event={"ID":"e87b7bd7-9d39-48f0-b896-fe5da437416f","Type":"ContainerStarted","Data":"508d1c792460bc39db9ae8e965f3827158743c9a641c7c3ea4aa1eeb8901435f"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.092361 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" podStartSLOduration=115.09232891 podStartE2EDuration="1m55.09232891s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.089822919 +0000 UTC m=+177.745562044" watchObservedRunningTime="2026-02-26 11:08:31.09232891 +0000 UTC m=+177.748068025" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.092667 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" podStartSLOduration=114.09265883 podStartE2EDuration="1m54.09265883s" podCreationTimestamp="2026-02-26 11:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:30.997330785 +0000 UTC m=+177.653069920" watchObservedRunningTime="2026-02-26 11:08:31.09265883 +0000 UTC m=+177.748397965" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.098139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.098639 4724 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.59862184 +0000 UTC m=+178.254360955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.111574 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" event={"ID":"9063c94b-5e44-4a4a-9c85-e122cf7751b9","Type":"ContainerStarted","Data":"47e9ba4e299457e9a7da15300bc5c91698171cf6813c25b7f481899b0c7cb9cf"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.127334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" event={"ID":"42a649e6-a13d-4a1d-94a6-82c03d5a913b","Type":"ContainerStarted","Data":"1aaac033a5336dfbf209d59254a1bd6dff52e6a2dec476192996e0cea32979e9"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.167907 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:31 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:31 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:31 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.167988 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.168139 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" event={"ID":"01a2fc1a-81a3-4607-926f-5e8ee502a3c9","Type":"ContainerStarted","Data":"b7f591d9a7257ac391490b135ec80938a253afa160b4f5d43ad5a2185317f306"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.197730 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hn56r" podStartSLOduration=115.197700892 podStartE2EDuration="1m55.197700892s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.194496221 +0000 UTC m=+177.850235356" watchObservedRunningTime="2026-02-26 11:08:31.197700892 +0000 UTC m=+177.853440017" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.200514 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.201696 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.701656585 +0000 UTC m=+178.357395880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.219827 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nnjz2" event={"ID":"80040d88-3ec4-42f5-94f6-9c8afef81d73","Type":"ContainerStarted","Data":"84385d6c1d8095601f0e9aa671c63ef9bb7d5067fa78fe7ad950153d23fdb463"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.248293 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nnjz2" podStartSLOduration=8.248256287 podStartE2EDuration="8.248256287s" podCreationTimestamp="2026-02-26 11:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.247805074 +0000 UTC m=+177.903544209" watchObservedRunningTime="2026-02-26 11:08:31.248256287 +0000 UTC m=+177.903995412" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.262342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" event={"ID":"01c4a397-4485-49bc-9ee3-c794832fd1ee","Type":"ContainerStarted","Data":"041b1d84dc2212b765d4c4188790bd561f46298534305662a93d23c2f4aa77ec"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.263976 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.270643 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nj24t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.270738 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.310684 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.312817 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.812790721 +0000 UTC m=+178.468530026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.329637 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" event={"ID":"5f469f47-990d-4224-8002-c658ef626f48","Type":"ContainerStarted","Data":"4a2800e72745c3492bc0bb7d7932c09bb3f178b95199903d706e3b44b78023f1"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.417052 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.417467 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.917437522 +0000 UTC m=+178.573176637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.417942 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.418373 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:31.918364949 +0000 UTC m=+178.574104064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.425949 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" event={"ID":"4c9eec4e-df3c-411b-8629-421f3abfb500","Type":"ContainerStarted","Data":"441670811b657ad77185d29cee2754675d9f76dfedae2f02421131a750ecd3d0"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.489097 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" event={"ID":"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f","Type":"ContainerStarted","Data":"7ee30a833abdba026698a845fc7e1b88dcbcfad6e78ff9fb4e62c2aeaee3dbbf"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.517142 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" podStartSLOduration=115.517112841 podStartE2EDuration="1m55.517112841s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.343277483 +0000 UTC m=+177.999016608" watchObservedRunningTime="2026-02-26 11:08:31.517112841 +0000 UTC m=+178.172851976" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.517786 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" event={"ID":"4c344657-7620-4366-80a9-84de8ed2face","Type":"ContainerStarted","Data":"c78ccbd4c0d395d7bb37a0f15acfbc9215e64b7c895970b134ea10e763069a36"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.518055 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" podStartSLOduration=115.518046298 podStartE2EDuration="1m55.518046298s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.516650588 +0000 UTC m=+178.172389713" watchObservedRunningTime="2026-02-26 11:08:31.518046298 +0000 UTC m=+178.173785413" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.518767 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.520445 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.020418995 +0000 UTC m=+178.676158100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.568098 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k5ktg" event={"ID":"7027d958-98c3-4fd1-9442-232be60e1eb7","Type":"ContainerStarted","Data":"ee6d3de71827e2c28c30694e0167b2d98f1b93820b63b7f563d157c0b08b21b9"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.568643 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.577757 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.577826 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.592895 4724 generic.go:334] "Generic (PLEG): container finished" podID="207d3079-e7ed-46b9-8744-aed50bb42352" containerID="714b2d98148618c4880c37933fdf98f552d40e79cea1b79d29066abe9cd43e59" exitCode=0 Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.593144 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" event={"ID":"207d3079-e7ed-46b9-8744-aed50bb42352","Type":"ContainerDied","Data":"714b2d98148618c4880c37933fdf98f552d40e79cea1b79d29066abe9cd43e59"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.601347 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k5ktg" podStartSLOduration=115.601330588 podStartE2EDuration="1m55.601330588s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.599982469 +0000 UTC m=+178.255721584" watchObservedRunningTime="2026-02-26 11:08:31.601330588 +0000 UTC m=+178.257069693" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.621742 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mlrhs" event={"ID":"2ee196d3-f421-4369-a590-691e0d0960b7","Type":"ContainerStarted","Data":"99ab305da6a909a632234adc5d2c70ec20ea0d1f61e8cadfb541dbda56a844a5"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.625204 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.625666 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.125646353 +0000 UTC m=+178.781385468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.705480 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerStarted","Data":"70484096b07cc818074617dff45ec4339f1ec6e33f114f56f47e4b6f2c344ac9"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.734047 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.737058 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8kd6n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.737107 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.738200 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.744862 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhtn5" event={"ID":"dfd78b66-3464-48ce-9017-0fd1ff5e26f7","Type":"ContainerStarted","Data":"1d9b195317ca16a2ab1e130295abf17d72d32ddde2c6b08a490c7a67eee7e00e"} Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.745006 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhtn5" event={"ID":"dfd78b66-3464-48ce-9017-0fd1ff5e26f7","Type":"ContainerStarted","Data":"f2edf78374ef0590567086df48afb02f0017e8d55d77892efe2ae9a51ff630af"} Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.800991 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:08:32.300952333 +0000 UTC m=+178.956691448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.806569 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mlrhs" podStartSLOduration=8.806539282 podStartE2EDuration="8.806539282s" podCreationTimestamp="2026-02-26 11:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.726571677 +0000 UTC m=+178.382310812" watchObservedRunningTime="2026-02-26 11:08:31.806539282 +0000 UTC m=+178.462278397" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.817199 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podStartSLOduration=114.817142996 podStartE2EDuration="1m54.817142996s" podCreationTimestamp="2026-02-26 11:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:31.798532684 +0000 UTC m=+178.454271819" watchObservedRunningTime="2026-02-26 11:08:31.817142996 +0000 UTC m=+178.472882121" Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.841817 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.878197 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.378155699 +0000 UTC m=+179.033894814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:31 crc kubenswrapper[4724]: I0226 11:08:31.963025 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:31 crc kubenswrapper[4724]: E0226 11:08:31.963799 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.463733655 +0000 UTC m=+179.119472770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.071390 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.072061 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.5720374 +0000 UTC m=+179.227776515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.173137 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.174859 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.674827228 +0000 UTC m=+179.330566343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.175091 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:32 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:32 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:32 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.175166 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.276750 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.277395 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.777369649 +0000 UTC m=+179.433108764 (durationBeforeRetry 500ms). 
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.380556 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.381123 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.881090623 +0000 UTC m=+179.536829738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.483290 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.483808 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:32.983761787 +0000 UTC m=+179.639500902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.584522 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.585211 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.08500101 +0000 UTC m=+179.740740125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.585402 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.585753 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.085745602 +0000 UTC m=+179.741484717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.690195 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.690367 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.190336031 +0000 UTC m=+179.846075156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.691977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.692522 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.192512613 +0000 UTC m=+179.848251728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.780309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" event={"ID":"01a2fc1a-81a3-4607-926f-5e8ee502a3c9","Type":"ContainerStarted","Data":"5e18766a0251357f83bb70b5355dcaa8b05b0ae47df6bca5b4dfe610e067c691"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.794096 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.794459 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.294386315 +0000 UTC m=+179.950125450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.794860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.795497 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.295486996 +0000 UTC m=+179.951226111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.798830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhtn5" event={"ID":"dfd78b66-3464-48ce-9017-0fd1ff5e26f7","Type":"ContainerStarted","Data":"421db6ff81133cebd488faf3ffcf762b88ab9f20830117b79cea365db033fce5"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.800009 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zhtn5"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.814389 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" event={"ID":"444082d7-63dc-4363-ad17-5b61e61895ed","Type":"ContainerStarted","Data":"49c370d88ed0899e62676c83cd27bf1eb8bd505baf1c18e5b140716d8c787e68"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.815921 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.819681 4724 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nc5bx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.819907 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" podUID="444082d7-63dc-4363-ad17-5b61e61895ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.839646 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" event={"ID":"5f469f47-990d-4224-8002-c658ef626f48","Type":"ContainerStarted","Data":"dc312fc8866b864610732aebc21f12d909e36023b79c4cb44fb78526ae16a484"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.840538 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.843503 4724 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2m27r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body=
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.843622 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.844124 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" event={"ID":"f3546882-cc78-45d2-b99d-9d14605bdc5b","Type":"ContainerStarted","Data":"f5d855befd6f0cf09abce249c8c865a342d106e58162aa0b964bfc614c10c871"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.860352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" event={"ID":"feac8cdb-eb8a-4f0d-afee-d18467d73727","Type":"ContainerStarted","Data":"9ede2da8a7e556f4cd4df6d3008e6723e4c619194f78a1b5e7e3e15de9643f0d"}
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.899200 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.399145789 +0000 UTC m=+180.054884904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.899296 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.899476 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c" event={"ID":"0981a4e3-56c8-49a4-a65f-94d3d916eef8","Type":"ContainerStarted","Data":"7c87c2eedc09e5aa1a85dc33bc591eb929bb73151a5cb8734de625bfd964c21c"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.899829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:32 crc kubenswrapper[4724]: E0226 11:08:32.900608 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.400576429 +0000 UTC m=+180.056315724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.912980 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" event={"ID":"4a214909-86a1-4cbb-bccc-4f24faa05d4b","Type":"ContainerStarted","Data":"8bf4f660fe4035e42fc2e7d75aeab2e00559b2e084011e80caf35629f4e672d7"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.914002 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ncl99" podStartSLOduration=116.913973172 podStartE2EDuration="1m56.913973172s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:32.821036646 +0000 UTC m=+179.476775771" watchObservedRunningTime="2026-02-26 11:08:32.913973172 +0000 UTC m=+179.569712287"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.915237 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zhtn5" podStartSLOduration=9.915229428 podStartE2EDuration="9.915229428s" podCreationTimestamp="2026-02-26 11:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:32.912312535 +0000 UTC m=+179.568051670" watchObservedRunningTime="2026-02-26 11:08:32.915229428 +0000 UTC m=+179.570968543"
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.935311 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mlrhs" event={"ID":"2ee196d3-f421-4369-a590-691e0d0960b7","Type":"ContainerStarted","Data":"8ec977402e25ba62b29e7ada39c2f445ccccf4d5d67c45303fdad9e198a4ee12"}
Feb 26 11:08:32 crc kubenswrapper[4724]: I0226 11:08:32.974616 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" podStartSLOduration=116.974573804 podStartE2EDuration="1m56.974573804s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:32.967305146 +0000 UTC m=+179.623044271" watchObservedRunningTime="2026-02-26 11:08:32.974573804 +0000 UTC m=+179.630312919"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.003576 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.004875 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.504835309 +0000 UTC m=+180.160574604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.016465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" event={"ID":"809c874c-661e-43c3-9e0e-6ee95ed8586e","Type":"ContainerStarted","Data":"c75e9711e110eee5b27cdcab269a9148d4fd96c281588ec20eb0d791bf94c91f"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.025531 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kkmrh" podStartSLOduration=116.025339015 podStartE2EDuration="1m56.025339015s" podCreationTimestamp="2026-02-26 11:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.02447108 +0000 UTC m=+179.680210195" watchObservedRunningTime="2026-02-26 11:08:33.025339015 +0000 UTC m=+179.681078140"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.042074 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" event={"ID":"9063c94b-5e44-4a4a-9c85-e122cf7751b9","Type":"ContainerStarted","Data":"134d5cba4c6dc91b80a43b21de819279b097a8243328e67d2e3bb723ac4cbb1f"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.042804 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rrbmc"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.052988 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.053454 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.069535 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" event={"ID":"309c37fa-849e-460c-9816-4d67aa631021","Type":"ContainerStarted","Data":"ed12c873476e18812a7e8d75a7929dfd5e69c741880d02635d6cce345c5eb076"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.071883 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.099483 4724 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mqt24 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.099583 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" podUID="309c37fa-849e-460c-9816-4d67aa631021" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.113926 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.116296 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.616246963 +0000 UTC m=+180.271986258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.125890 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" podStartSLOduration=117.125845107 podStartE2EDuration="1m57.125845107s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.12382369 +0000 UTC m=+179.779562805" watchObservedRunningTime="2026-02-26 11:08:33.125845107 +0000 UTC m=+179.781584222"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.136764 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" event={"ID":"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab","Type":"ContainerStarted","Data":"bba11a999384ad61f0972da250d35ca1052da72e6897dfe3aecbf1655074ba93"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.163970 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 26 11:08:33 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 26 11:08:33 crc kubenswrapper[4724]: [+]process-running ok
Feb 26 11:08:33 crc kubenswrapper[4724]: healthz check failed
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.165295 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.172451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" event={"ID":"4c344657-7620-4366-80a9-84de8ed2face","Type":"ContainerStarted","Data":"f13d5d983fb385f90cec752f2370db52536793ab02258d6749a39bddd5e6b9ed"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.172544 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" event={"ID":"4c344657-7620-4366-80a9-84de8ed2face","Type":"ContainerStarted","Data":"e15771e720aa9618d4c45e193f1f3680c8056054d7329eeabd3bb71c93cb6672"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.202158 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" podStartSLOduration=116.202129158 podStartE2EDuration="1m56.202129158s" podCreationTimestamp="2026-02-26 11:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.201570382 +0000 UTC m=+179.857309497" watchObservedRunningTime="2026-02-26 11:08:33.202129158 +0000 UTC m=+179.857868273"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.204850 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" event={"ID":"4d5a1aaf-95fa-4eaf-8f2f-68c48937af6f","Type":"ContainerStarted","Data":"ecd90e88cc1ceec2fe85ee14b5f7412d12dc5b7c07ebee71e06eff9d132153c3"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.215522 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.217694 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.717664892 +0000 UTC m=+180.373404007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.244531 4724 generic.go:334] "Generic (PLEG): container finished" podID="0eb89f1c-1230-4455-86c1-6ad3796969a9" containerID="9a4f32460ffb5adcd97a229e2ac8e022faca3583cb450f0dbf69701ba6f1ad6a" exitCode=0
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.245152 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" event={"ID":"0eb89f1c-1230-4455-86c1-6ad3796969a9","Type":"ContainerDied","Data":"9a4f32460ffb5adcd97a229e2ac8e022faca3583cb450f0dbf69701ba6f1ad6a"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.257592 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podStartSLOduration=117.257553222 podStartE2EDuration="1m57.257553222s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.239921178 +0000 UTC m=+179.895660303" watchObservedRunningTime="2026-02-26 11:08:33.257553222 +0000 UTC m=+179.913292337"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.280191 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" event={"ID":"84a12f19-2563-48d0-8682-26dd701b62ce","Type":"ContainerStarted","Data":"467fbf0aeb14effb70b48855f4f6b931150209ca7f7d83754ac1a6cd8366c1fd"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.282278 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69bcg" podStartSLOduration=117.282251847 podStartE2EDuration="1m57.282251847s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.282036191 +0000 UTC m=+179.937775306" watchObservedRunningTime="2026-02-26 11:08:33.282251847 +0000 UTC m=+179.937990952"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.305973 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" event={"ID":"6a9effc4-1c10-46ae-9762-1f3308aa9bc9","Type":"ContainerStarted","Data":"fcbee01f89554e7c389ff41199fe50ea39d254760ccc2c641d8af11600397108"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.306065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" event={"ID":"6a9effc4-1c10-46ae-9762-1f3308aa9bc9","Type":"ContainerStarted","Data":"a35e5344978f842f0f83131ca5b49ca84b574830571e3720ee3b997bc22fc0be"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.331016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.332431 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.832411701 +0000 UTC m=+180.488150816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.344422 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" event={"ID":"fe850ec7-4df8-4628-ae55-3c922de012e8","Type":"ContainerStarted","Data":"087a37d7dcec59723fd4467917533fc3cc8ab499c84afa367e00cbc889f0a353"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.472855 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.474832 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:33.974809581 +0000 UTC m=+180.630548696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.477082 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" podStartSLOduration=117.477043745 podStartE2EDuration="1m57.477043745s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.47199226 +0000 UTC m=+180.127731385" watchObservedRunningTime="2026-02-26 11:08:33.477043745 +0000 UTC m=+180.132782870"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.493627 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nj24t"]
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.511986 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" event={"ID":"630d11de-abc5-47ed-8284-7bbf4ec5b9c8","Type":"ContainerStarted","Data":"55bbd2b87ed8b7a67c6cb6e185f800581b01c3abb4da57e29a9c51d6e1b4073f"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.536095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" event={"ID":"207d3079-e7ed-46b9-8744-aed50bb42352","Type":"ContainerStarted","Data":"1a71e98ccb8a3a292b7ca5867b6bec5b6b8afccb2f81166002b0fa2a6db75108"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.542734 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.554471 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-thlzt" podStartSLOduration=117.554436897 podStartE2EDuration="1m57.554436897s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.530829282 +0000 UTC m=+180.186568397" watchObservedRunningTime="2026-02-26 11:08:33.554436897 +0000 UTC m=+180.210176022"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.558283 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"]
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.558392 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" event={"ID":"6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52","Type":"ContainerStarted","Data":"fac469b6580a2b24420d9b207c2531efff1d0897b5107851872eb3b6f528e977"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.558455 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.559140 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.559348 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.575117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.575686 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879"
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.589922 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.08988994 +0000 UTC m=+180.745629055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.621029 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" event={"ID":"34fa2b5c-f1b3-434f-a307-be966f1d64d9","Type":"ContainerStarted","Data":"c7c52062c82e3a52637e77d94086eedd3dd5c446b4991cc07d0a55f49ddb8099"}
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.623104 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.628429 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nj24t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.628507 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.628898 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8kd6n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.628921 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.629461 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.629630 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.643494 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a83b55-07c3-47d4-9e4a-9d613f82d8a4-metrics-certs\") pod \"network-metrics-daemon-tj879\" (UID: \"00a83b55-07c3-47d4-9e4a-9d613f82d8a4\") " pod="openshift-multus/network-metrics-daemon-tj879"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.650658 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" podStartSLOduration=117.650615345 podStartE2EDuration="1m57.650615345s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.587823681 +0000 UTC m=+180.243562806" watchObservedRunningTime="2026-02-26 11:08:33.650615345 +0000 UTC m=+180.306354460"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.689396 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.689894 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.189868077 +0000 UTC m=+180.845607182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.766278 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8v4r5" podStartSLOduration=116.76625517 podStartE2EDuration="1m56.76625517s" podCreationTimestamp="2026-02-26 11:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.707861781 +0000 UTC m=+180.363600926" watchObservedRunningTime="2026-02-26 11:08:33.76625517 +0000 UTC m=+180.421994295"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.768673 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" podStartSLOduration=117.768665079 podStartE2EDuration="1m57.768665079s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.764318225 +0000 UTC m=+180.420057360" watchObservedRunningTime="2026-02-26 11:08:33.768665079 +0000 UTC m=+180.424404194"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.795134 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.795669 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.29564695 +0000 UTC m=+180.951386065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.910755 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:33 crc kubenswrapper[4724]: E0226 11:08:33.911246 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.411221313 +0000 UTC m=+181.066960428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.915715 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.928136 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tj879"
Feb 26 11:08:33 crc kubenswrapper[4724]: I0226 11:08:33.959594 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d9shf" podStartSLOduration=117.959563875 podStartE2EDuration="1m57.959563875s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:33.94259719 +0000 UTC m=+180.598336335" watchObservedRunningTime="2026-02-26 11:08:33.959563875 +0000 UTC m=+180.615302990"
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.023138 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.023780 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.523754729 +0000 UTC m=+181.179493844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.136330 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.136923 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.636872322 +0000 UTC m=+181.292611437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
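The pod_startup_latency_tracker lines above report podStartSLOduration as watchObservedRunningTime minus podCreationTimestamp; for machine-approver, 11:08:33.650615345 minus 11:06:36 is 117.650615345s, which also matches podStartE2EDuration="1m57.650615345s". A small, self-contained Go check of that arithmetic, using values copied from the log (the layout string is the standard Go time format, not anything kubelet-specific):

```go
// Verifies the pod-startup-duration arithmetic with the machine-approver values.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches the log's timestamp style
	created, err := time.Parse(layout, "2026-02-26 11:06:36 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-02-26 11:08:33.650615345 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints podStartSLOduration=117.650615345s, matching the log line above.
	fmt.Printf("podStartSLOduration=%.9fs\n", observed.Sub(created).Seconds())
}
```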
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.191125 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.192079 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.692056339 +0000 UTC m=+181.347795464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.193299 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4f5jn" podStartSLOduration=118.193269284 podStartE2EDuration="1m58.193269284s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:34.19137 +0000 UTC m=+180.847109135" watchObservedRunningTime="2026-02-26 11:08:34.193269284 +0000 UTC m=+180.849008409"
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.222380 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 26 11:08:34 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld
Feb 26 11:08:34 crc kubenswrapper[4724]: [+]process-running ok
Feb 26 11:08:34 crc kubenswrapper[4724]: healthz check failed
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.222475 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.292757 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.293593 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.79356579 +0000 UTC m=+181.449304905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.310608 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podStartSLOduration=118.310583846 podStartE2EDuration="1m58.310583846s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:34.309740792 +0000 UTC m=+180.965479937" watchObservedRunningTime="2026-02-26 11:08:34.310583846 +0000 UTC m=+180.966322961"
Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.394319 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.394749 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.894732361 +0000 UTC m=+181.550471476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.415943 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" podStartSLOduration=118.415904746 podStartE2EDuration="1m58.415904746s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:34.388380289 +0000 UTC m=+181.044119424" watchObservedRunningTime="2026-02-26 11:08:34.415904746 +0000 UTC m=+181.071643861" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.495261 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.496452 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:34.996408537 +0000 UTC m=+181.652147822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.598912 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.098889526 +0000 UTC m=+181.754628641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.598433 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.620252 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p" podStartSLOduration=118.620219365 podStartE2EDuration="1m58.620219365s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:34.505853607 +0000 UTC m=+181.161592722" watchObservedRunningTime="2026-02-26 11:08:34.620219365 +0000 UTC m=+181.275958480" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.625801 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mxbr7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.625878 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.704395 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.704909 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.204885625 +0000 UTC m=+181.860624740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.718435 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" event={"ID":"f73a3c79-e83b-4cf2-9a39-ca27f3f3feab","Type":"ContainerStarted","Data":"795c80c6d6606eaff338244028aa546e267c06537f692142edeaa60894fff101"} Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.779890 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" event={"ID":"0eb89f1c-1230-4455-86c1-6ad3796969a9","Type":"ContainerStarted","Data":"34d70b29f86f281f270e6a1ae537affb8d25d29df25b866864dc57e2ecdc08a1"} Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.808053 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.809796 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.309779023 +0000 UTC m=+181.965518138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.824862 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s7c4t" event={"ID":"84a12f19-2563-48d0-8682-26dd701b62ce","Type":"ContainerStarted","Data":"cfbccc8a363ce28f63900bd6bcfef8c931e93fdc67f7e021eba4fe66db5a9e15"} Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.864189 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" containerID="cri-o://041b1d84dc2212b765d4c4188790bd561f46298534305662a93d23c2f4aa77ec" gracePeriod=30 Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.865779 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c" event={"ID":"0981a4e3-56c8-49a4-a65f-94d3d916eef8","Type":"ContainerStarted","Data":"d2cfc9583172a5089e571637557a1f9f69c6fa67d3f0c659da865cb7c59a6f1c"} Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.871344 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" containerID="cri-o://c97f6a80673402dc556d8a667efc01d48311935195a0443f3472e2167a1a0f4c" gracePeriod=30 Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.874797 4724 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2m27r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.874898 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.875025 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.875057 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.875650 4724 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-8kd6n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.875686 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.891341 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.891407 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.891733 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nj24t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.891761 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.899292 4724 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nc5bx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.899395 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" podUID="444082d7-63dc-4363-ad17-5b61e61895ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.901535 4724 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mqt24 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.901616 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" podUID="309c37fa-849e-460c-9816-4d67aa631021" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 26 11:08:34 crc kubenswrapper[4724]: I0226 11:08:34.910338 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:34 crc kubenswrapper[4724]: E0226 11:08:34.937827 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.437782021 +0000 UTC m=+182.093521296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.027435 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.064650 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.564625586 +0000 UTC m=+182.220364701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.136915 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.139070 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.639014772 +0000 UTC m=+182.294753877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.158294 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.143765 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.158596 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.158628 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.159288 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.659268401 +0000 UTC m=+182.315007516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.171305 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8x4t" podStartSLOduration=119.171275264 podStartE2EDuration="1m59.171275264s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:34.762652246 +0000 UTC m=+181.418391361" watchObservedRunningTime="2026-02-26 11:08:35.171275264 +0000 UTC m=+181.827014379" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.172623 4724 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-psfvt container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.172745 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" podUID="feac8cdb-eb8a-4f0d-afee-d18467d73727" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.172873 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:35 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:35 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:35 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.172892 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.259808 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.261587 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.761548634 +0000 UTC m=+182.417287749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.313226 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.313841 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.321742 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mxbr7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": read tcp 10.217.0.2:41360->10.217.0.12:8443: read: connection reset by peer" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.322005 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": read tcp 10.217.0.2:41360->10.217.0.12:8443: read: connection reset by peer" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.329666 4724 patch_prober.go:28] interesting pod/console-f9d7485db-9cwcb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.329766 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9cwcb" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.351111 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.351202 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.351451 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.351573 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" 
podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.362591 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.362955 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.862943852 +0000 UTC m=+182.518682967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.421381 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsm2c" podStartSLOduration=119.421351441 podStartE2EDuration="1m59.421351441s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:35.076368662 +0000 UTC m=+181.732107797" watchObservedRunningTime="2026-02-26 11:08:35.421351441 +0000 UTC m=+182.077090576" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.424632 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tj879"] Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.464261 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.465898 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:35.965858003 +0000 UTC m=+182.621597308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: W0226 11:08:35.478325 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00a83b55_07c3_47d4_9e4a_9d613f82d8a4.slice/crio-e6767c1d31b05eef1d32aa12ed1c74510ec361ea0c003e7b6928faefdacc9b34 WatchSource:0}: Error finding container e6767c1d31b05eef1d32aa12ed1c74510ec361ea0c003e7b6928faefdacc9b34: Status 404 returned error can't find the container with id e6767c1d31b05eef1d32aa12ed1c74510ec361ea0c003e7b6928faefdacc9b34 Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.566024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.566856 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.066828679 +0000 UTC m=+182.722567794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.649734 4724 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mxbr7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.649853 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.667093 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.667999 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.16797073 +0000 UTC m=+182.823709845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.747084 4724 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nj24t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.747211 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.773364 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.773992 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.273961729 +0000 UTC m=+182.929700844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.813634 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.814745 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.837954 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.838088 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.874849 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.875190 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6a646f4-99de-431f-a70b-109291465b0a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.875294 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6a646f4-99de-431f-a70b-109291465b0a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.875423 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.375396978 +0000 UTC m=+183.031136093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.880624 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-nj24t_01c4a397-4485-49bc-9ee3-c794832fd1ee/controller-manager/0.log" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.880717 4724 generic.go:334] "Generic (PLEG): container finished" podID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerID="041b1d84dc2212b765d4c4188790bd561f46298534305662a93d23c2f4aa77ec" exitCode=2 Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.880820 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" event={"ID":"01c4a397-4485-49bc-9ee3-c794832fd1ee","Type":"ContainerDied","Data":"041b1d84dc2212b765d4c4188790bd561f46298534305662a93d23c2f4aa77ec"} Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.898580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tj879" event={"ID":"00a83b55-07c3-47d4-9e4a-9d613f82d8a4","Type":"ContainerStarted","Data":"e6767c1d31b05eef1d32aa12ed1c74510ec361ea0c003e7b6928faefdacc9b34"} Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.899143 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.899290 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.910306 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nc5bx" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.913954 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.977119 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.978256 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6a646f4-99de-431f-a70b-109291465b0a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.978764 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6a646f4-99de-431f-a70b-109291465b0a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:35 crc kubenswrapper[4724]: I0226 11:08:35.984796 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6a646f4-99de-431f-a70b-109291465b0a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:35 crc kubenswrapper[4724]: E0226 11:08:35.987482 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.4874486 +0000 UTC m=+183.143187925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.049716 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mqt24" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.081322 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.082127 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.582057584 +0000 UTC m=+183.237796699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.105521 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6a646f4-99de-431f-a70b-109291465b0a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.111083 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8kd6n container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.111200 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.111315 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8kd6n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.111348 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.154961 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.161072 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:36 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:36 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:36 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.161157 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.190590 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.191073 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.691052219 +0000 UTC m=+183.346791334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.294794 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.296061 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.796031889 +0000 UTC m=+183.451771014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.398255 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:36.89822377 +0000 UTC m=+183.553962885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.397723 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.499882 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.500349 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.000170213 +0000 UTC m=+183.655909328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.500555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.501074 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.001052909 +0000 UTC m=+183.656792014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.602881 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.603058 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.103031263 +0000 UTC m=+183.758770378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.603096 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.603594 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.103574079 +0000 UTC m=+183.759313194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.704633 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.704782 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.204754631 +0000 UTC m=+183.860493746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.704966 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.705364 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.205353188 +0000 UTC m=+183.861092303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.806202 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.815204 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.315141065 +0000 UTC m=+183.970880180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.906551 4724 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2m27r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded" start-of-body= Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.907111 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.913760 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:36 crc kubenswrapper[4724]: E0226 11:08:36.914265 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.414247048 +0000 UTC m=+184.069986163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.948087 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" event={"ID":"0eb89f1c-1230-4455-86c1-6ad3796969a9","Type":"ContainerStarted","Data":"11136b5f8f893be8325bea1d1875bee364bfa96740da8c07f07e9e5e5f0f1578"} Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.965896 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6576b87f9c-mxbr7_7ffca8b8-930c-4a19-93ff-e47500546d2e/route-controller-manager/0.log" Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.965969 4724 generic.go:334] "Generic (PLEG): container finished" podID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerID="c97f6a80673402dc556d8a667efc01d48311935195a0443f3472e2167a1a0f4c" exitCode=255 Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.966067 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" event={"ID":"7ffca8b8-930c-4a19-93ff-e47500546d2e","Type":"ContainerDied","Data":"c97f6a80673402dc556d8a667efc01d48311935195a0443f3472e2167a1a0f4c"} Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.980140 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tj879" event={"ID":"00a83b55-07c3-47d4-9e4a-9d613f82d8a4","Type":"ContainerStarted","Data":"6a1ba646ad02cb1e6791f72ccea15e04cedde1e461e61aef1618af053411834f"} Feb 26 11:08:36 crc kubenswrapper[4724]: I0226 11:08:36.997346 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" event={"ID":"4c9eec4e-df3c-411b-8629-421f3abfb500","Type":"ContainerStarted","Data":"a79169f7662ba29577d5efd3c994fe9aa314e5868b4152f95026858fdb6c798d"} Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.002776 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" podStartSLOduration=121.002738057 podStartE2EDuration="2m1.002738057s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:36.995718846 +0000 UTC m=+183.651457981" watchObservedRunningTime="2026-02-26 11:08:37.002738057 +0000 UTC m=+183.658477172" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.009339 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.009431 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.011255 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.011390 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.015846 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.017492 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.517457478 +0000 UTC m=+184.173196603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.100530 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.102301 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.118777 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.119303 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.619282638 +0000 UTC m=+184.275021753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.121374 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.121735 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.143699 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.147738 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded" start-of-body= Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.147812 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.150091 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.150233 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.159602 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:37 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:37 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:37 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.159715 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.219800 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.220275 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939ca031-3abd-432c-a810-b252a35fb690-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.220371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939ca031-3abd-432c-a810-b252a35fb690-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.220915 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.720887362 +0000 UTC m=+184.376626477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.339159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939ca031-3abd-432c-a810-b252a35fb690-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.339303 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939ca031-3abd-432c-a810-b252a35fb690-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.339765 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.339821 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939ca031-3abd-432c-a810-b252a35fb690-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 
11:08:37.340674 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.840653625 +0000 UTC m=+184.496392740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.395258 4724 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-md2vv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.395344 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" podUID="207d3079-e7ed-46b9-8744-aed50bb42352" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.395906 4724 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-md2vv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.395933 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" podUID="207d3079-e7ed-46b9-8744-aed50bb42352" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.430510 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939ca031-3abd-432c-a810-b252a35fb690-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.446977 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.447454 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:37.947430306 +0000 UTC m=+184.603169421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.448115 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.561242 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.561756 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.061729363 +0000 UTC m=+184.717468478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.670359 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.671494 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.171469949 +0000 UTC m=+184.827209064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.775723 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.776195 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.276163181 +0000 UTC m=+184.931902296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.877692 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.877886 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.377859307 +0000 UTC m=+185.033598422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.878070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.878554 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.378544946 +0000 UTC m=+185.034284061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:37 crc kubenswrapper[4724]: I0226 11:08:37.979140 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:37 crc kubenswrapper[4724]: E0226 11:08:37.979799 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.479749369 +0000 UTC m=+185.135488494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.047805 4724 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2m27r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.047927 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.069486 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92dsj"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.071125 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.098681 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.099855 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.100303 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.600285044 +0000 UTC m=+185.256024159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.127196 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92dsj"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.157989 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:38 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:38 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:38 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.158102 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.173336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tj879" event={"ID":"00a83b55-07c3-47d4-9e4a-9d613f82d8a4","Type":"ContainerStarted","Data":"b619b83f49b0618f38645c7de9a5dbddc5bcbb9363db2d9c379da2ee901361aa"} Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.201110 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.201601 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-catalog-content\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.201674 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4gb\" (UniqueName: \"kubernetes.io/projected/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-kube-api-access-hq4gb\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.201701 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-utilities\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.201878 4724 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.701850066 +0000 UTC m=+185.357589181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.236338 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9shd"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.242503 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.249898 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.275497 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54950: no serving certificate available for the kubelet" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.306324 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.306565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-catalog-content\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.306772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq4gb\" (UniqueName: \"kubernetes.io/projected/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-kube-api-access-hq4gb\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.306839 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-utilities\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.319071 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.819049796 +0000 UTC m=+185.474788911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.321446 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-utilities\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.333501 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tj879" podStartSLOduration=122.333459218 podStartE2EDuration="2m2.333459218s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:38.287576627 +0000 UTC m=+184.943315742" watchObservedRunningTime="2026-02-26 11:08:38.333459218 +0000 UTC m=+184.989198343" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.334532 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9shd"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.354959 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-catalog-content\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.394596 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq4gb\" (UniqueName: \"kubernetes.io/projected/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-kube-api-access-hq4gb\") pod \"certified-operators-92dsj\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.419150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.419595 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-catalog-content\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.419711 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-utilities\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " 
pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.419736 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sht7\" (UniqueName: \"kubernetes.io/projected/0eb55921-4244-4557-aa72-97cea802c3fb-kube-api-access-5sht7\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.419942 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:38.919912359 +0000 UTC m=+185.575651474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.444916 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.493239 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2gkcb"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.494374 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.524214 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-catalog-content\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.524260 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-catalog-content\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.524379 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-utilities\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.524412 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sht7\" (UniqueName: \"kubernetes.io/projected/0eb55921-4244-4557-aa72-97cea802c3fb-kube-api-access-5sht7\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.524509 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.524954 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.02493393 +0000 UTC m=+185.680673045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.525521 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-utilities\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.544195 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2gkcb"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.578684 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54952: no serving certificate available for the kubelet" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.625550 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.625910 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-catalog-content\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.625957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht5jv\" (UniqueName: \"kubernetes.io/projected/35a09ba5-1063-467d-b7a6-c1b2c37a135e-kube-api-access-ht5jv\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.626014 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-utilities\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.626240 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.126215215 +0000 UTC m=+185.781954330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.656146 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sht7\" (UniqueName: \"kubernetes.io/projected/0eb55921-4244-4557-aa72-97cea802c3fb-kube-api-access-5sht7\") pod \"community-operators-p9shd\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.707027 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlps5"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.720381 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.727375 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.727437 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-catalog-content\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.727486 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht5jv\" (UniqueName: \"kubernetes.io/projected/35a09ba5-1063-467d-b7a6-c1b2c37a135e-kube-api-access-ht5jv\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.727543 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-utilities\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.728068 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.228042585 +0000 UTC m=+185.883781700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.730481 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-utilities\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.731774 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-catalog-content\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.747901 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlps5"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.776964 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54954: no serving certificate available for the kubelet" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.821368 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5jv\" (UniqueName: \"kubernetes.io/projected/35a09ba5-1063-467d-b7a6-c1b2c37a135e-kube-api-access-ht5jv\") pod \"certified-operators-2gkcb\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.830465 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.830759 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-catalog-content\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.830803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-utilities\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.830841 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd8c\" (UniqueName: \"kubernetes.io/projected/f9ed0863-9bdf-48ba-ad70-c1c728c58730-kube-api-access-dnd8c\") pod \"community-operators-vlps5\" (UID: 
\"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.830999 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.330972427 +0000 UTC m=+185.986711542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.842258 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.871718 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.933439 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.933477 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-catalog-content\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.933509 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-utilities\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.933545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnd8c\" (UniqueName: \"kubernetes.io/projected/f9ed0863-9bdf-48ba-ad70-c1c728c58730-kube-api-access-dnd8c\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: E0226 11:08:38.934105 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.434093734 +0000 UTC m=+186.089832849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.934544 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-catalog-content\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.934826 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-utilities\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.946957 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54966: no serving certificate available for the kubelet" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.955640 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:08:38 crc kubenswrapper[4724]: I0226 11:08:38.988631 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnd8c\" (UniqueName: \"kubernetes.io/projected/f9ed0863-9bdf-48ba-ad70-c1c728c58730-kube-api-access-dnd8c\") pod \"community-operators-vlps5\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.035480 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.035714 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.535670737 +0000 UTC m=+186.191409852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.036143 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.036696 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.536685566 +0000 UTC m=+186.192424681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.079366 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-nj24t_01c4a397-4485-49bc-9ee3-c794832fd1ee/controller-manager/0.log" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.079776 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.079458 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.143528 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.143748 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.643715795 +0000 UTC m=+186.299454910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.144117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.144594 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.644578339 +0000 UTC m=+186.300317454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.155439 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:39 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:39 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:39 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.155509 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.244975 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-client-ca\") pod \"01c4a397-4485-49bc-9ee3-c794832fd1ee\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.245119 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-proxy-ca-bundles\") pod \"01c4a397-4485-49bc-9ee3-c794832fd1ee\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.245156 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-config\") pod \"01c4a397-4485-49bc-9ee3-c794832fd1ee\" (UID: 
\"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.245451 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.245496 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqhqs\" (UniqueName: \"kubernetes.io/projected/01c4a397-4485-49bc-9ee3-c794832fd1ee-kube-api-access-lqhqs\") pod \"01c4a397-4485-49bc-9ee3-c794832fd1ee\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.245545 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01c4a397-4485-49bc-9ee3-c794832fd1ee-serving-cert\") pod \"01c4a397-4485-49bc-9ee3-c794832fd1ee\" (UID: \"01c4a397-4485-49bc-9ee3-c794832fd1ee\") " Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.247572 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.747544862 +0000 UTC m=+186.403283977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.248567 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01c4a397-4485-49bc-9ee3-c794832fd1ee" (UID: "01c4a397-4485-49bc-9ee3-c794832fd1ee"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.248808 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "01c4a397-4485-49bc-9ee3-c794832fd1ee" (UID: "01c4a397-4485-49bc-9ee3-c794832fd1ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.249786 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-config" (OuterVolumeSpecName: "config") pod "01c4a397-4485-49bc-9ee3-c794832fd1ee" (UID: "01c4a397-4485-49bc-9ee3-c794832fd1ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.254165 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.254478 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.254498 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.254512 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01c4a397-4485-49bc-9ee3-c794832fd1ee-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.254991 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.754968514 +0000 UTC m=+186.410707629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.275456 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c4a397-4485-49bc-9ee3-c794832fd1ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01c4a397-4485-49bc-9ee3-c794832fd1ee" (UID: "01c4a397-4485-49bc-9ee3-c794832fd1ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.307033 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c4a397-4485-49bc-9ee3-c794832fd1ee-kube-api-access-lqhqs" (OuterVolumeSpecName: "kube-api-access-lqhqs") pod "01c4a397-4485-49bc-9ee3-c794832fd1ee" (UID: "01c4a397-4485-49bc-9ee3-c794832fd1ee"). InnerVolumeSpecName "kube-api-access-lqhqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.322568 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54982: no serving certificate available for the kubelet" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.337902 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-nj24t_01c4a397-4485-49bc-9ee3-c794832fd1ee/controller-manager/0.log" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.338099 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" event={"ID":"01c4a397-4485-49bc-9ee3-c794832fd1ee","Type":"ContainerDied","Data":"0e96fec0a9db4a084f9c439edb219654a9d1d505b89e5b9011692aeaf6bf4d06"} Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.338168 4724 scope.go:117] "RemoveContainer" containerID="041b1d84dc2212b765d4c4188790bd561f46298534305662a93d23c2f4aa77ec" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.338421 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nj24t" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.358455 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.358880 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqhqs\" (UniqueName: \"kubernetes.io/projected/01c4a397-4485-49bc-9ee3-c794832fd1ee-kube-api-access-lqhqs\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.358893 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01c4a397-4485-49bc-9ee3-c794832fd1ee-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.358980 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.858957566 +0000 UTC m=+186.514696681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.384942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6a646f4-99de-431f-a70b-109291465b0a","Type":"ContainerStarted","Data":"bd7a6e998ae6328a2524e3c62b12876199b993180377328d2a40ad3248c27a58"} Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.411281 4724 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.465102 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.465668 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:39.965649655 +0000 UTC m=+186.621388770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.534093 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"] Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.534653 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.534669 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.534779 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" containerName="controller-manager" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.535227 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.570480 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.570858 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.070834252 +0000 UTC m=+186.726573367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.570999 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.576094 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.076066181 +0000 UTC m=+186.731805296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.669726 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.670368 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.670398 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.670928 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.671892 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.672303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-proxy-ca-bundles\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.672344 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee358c6-9602-47d4-8780-25220a7289b0-serving-cert\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.672409 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-config\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.672455 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-client-ca\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.672507 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2w7\" (UniqueName: 
\"kubernetes.io/projected/bee358c6-9602-47d4-8780-25220a7289b0-kube-api-access-4c2w7\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.672752 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.674208 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.174159795 +0000 UTC m=+186.829898910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.678075 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"] Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.697683 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.719779 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.784641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.784711 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-proxy-ca-bundles\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.784750 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee358c6-9602-47d4-8780-25220a7289b0-serving-cert\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.784803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-config\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc 
kubenswrapper[4724]: I0226 11:08:39.784859 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-client-ca\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.784922 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2w7\" (UniqueName: \"kubernetes.io/projected/bee358c6-9602-47d4-8780-25220a7289b0-kube-api-access-4c2w7\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.785828 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.285813676 +0000 UTC m=+186.941552791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.789771 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-proxy-ca-bundles\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.790590 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-client-ca\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.806359 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-config\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.811637 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nj24t"] Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.827405 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee358c6-9602-47d4-8780-25220a7289b0-serving-cert\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.887319 4724 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.888750 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.388718337 +0000 UTC m=+187.044457452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.890001 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nj24t"] Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.902477 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54988: no serving certificate available for the kubelet" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.925834 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2w7\" (UniqueName: \"kubernetes.io/projected/bee358c6-9602-47d4-8780-25220a7289b0-kube-api-access-4c2w7\") pod \"controller-manager-77f6dcdd99-w6bvn\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") " pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.941232 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:39 crc kubenswrapper[4724]: W0226 11:08:39.990468 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod939ca031_3abd_432c_a810_b252a35fb690.slice/crio-387b32f18dd79caf503f665a955396c2cd078c6a4e7cf65bdab06b30a3f7a629 WatchSource:0}: Error finding container 387b32f18dd79caf503f665a955396c2cd078c6a4e7cf65bdab06b30a3f7a629: Status 404 returned error can't find the container with id 387b32f18dd79caf503f665a955396c2cd078c6a4e7cf65bdab06b30a3f7a629 Feb 26 11:08:39 crc kubenswrapper[4724]: I0226 11:08:39.992763 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:39 crc kubenswrapper[4724]: E0226 11:08:39.993597 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.493573143 +0000 UTC m=+187.149312258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.034730 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c4a397-4485-49bc-9ee3-c794832fd1ee" path="/var/lib/kubelet/pods/01c4a397-4485-49bc-9ee3-c794832fd1ee/volumes" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.056982 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.101349 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.101912 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.601887129 +0000 UTC m=+187.257626244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.104086 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6576b87f9c-mxbr7_7ffca8b8-930c-4a19-93ff-e47500546d2e/route-controller-manager/0.log" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.104168 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.133973 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92dsj"] Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.154454 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:40 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:40 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:40 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.154901 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.162541 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:40 crc kubenswrapper[4724]: W0226 11:08:40.164400 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a06e0f8_4c39_4fbd_a7fc_710337cbfafc.slice/crio-88c6fd068c3900d96f1c3cb1e5cdb6638b47d5caf15cc888124d07ed4f75c162 WatchSource:0}: Error finding container 88c6fd068c3900d96f1c3cb1e5cdb6638b47d5caf15cc888124d07ed4f75c162: Status 404 returned error can't find the container with id 88c6fd068c3900d96f1c3cb1e5cdb6638b47d5caf15cc888124d07ed4f75c162 Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.176668 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-psfvt" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.203263 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-config\") pod \"7ffca8b8-930c-4a19-93ff-e47500546d2e\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.203370 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkks5\" (UniqueName: \"kubernetes.io/projected/7ffca8b8-930c-4a19-93ff-e47500546d2e-kube-api-access-lkks5\") pod \"7ffca8b8-930c-4a19-93ff-e47500546d2e\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.203668 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ffca8b8-930c-4a19-93ff-e47500546d2e-serving-cert\") pod \"7ffca8b8-930c-4a19-93ff-e47500546d2e\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.203729 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-client-ca\") pod \"7ffca8b8-930c-4a19-93ff-e47500546d2e\" (UID: \"7ffca8b8-930c-4a19-93ff-e47500546d2e\") " Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.204105 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.204678 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.704657416 +0000 UTC m=+187.360396531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.205072 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "7ffca8b8-930c-4a19-93ff-e47500546d2e" (UID: "7ffca8b8-930c-4a19-93ff-e47500546d2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.205616 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-config" (OuterVolumeSpecName: "config") pod "7ffca8b8-930c-4a19-93ff-e47500546d2e" (UID: "7ffca8b8-930c-4a19-93ff-e47500546d2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.266429 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ffca8b8-930c-4a19-93ff-e47500546d2e-kube-api-access-lkks5" (OuterVolumeSpecName: "kube-api-access-lkks5") pod "7ffca8b8-930c-4a19-93ff-e47500546d2e" (UID: "7ffca8b8-930c-4a19-93ff-e47500546d2e"). InnerVolumeSpecName "kube-api-access-lkks5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.268296 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ffca8b8-930c-4a19-93ff-e47500546d2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7ffca8b8-930c-4a19-93ff-e47500546d2e" (UID: "7ffca8b8-930c-4a19-93ff-e47500546d2e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.305203 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.305751 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ffca8b8-930c-4a19-93ff-e47500546d2e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.305772 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.305783 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ffca8b8-930c-4a19-93ff-e47500546d2e-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.305793 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkks5\" (UniqueName: \"kubernetes.io/projected/7ffca8b8-930c-4a19-93ff-e47500546d2e-kube-api-access-lkks5\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.306958 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.806933869 +0000 UTC m=+187.462672984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.323261 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xb5gc"] Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.323639 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.323656 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.323785 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" containerName="route-controller-manager" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.327778 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.332591 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.350873 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54998: no serving certificate available for the kubelet" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.407517 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmbgj\" (UniqueName: \"kubernetes.io/projected/056030ad-19ca-4542-a486-139eb62524b0-kube-api-access-vmbgj\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.409319 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-utilities\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.409594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.409714 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-catalog-content\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.410341 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:40.910323204 +0000 UTC m=+187.566062319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.518930 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.519266 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.019227916 +0000 UTC m=+187.674967031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.519697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmbgj\" (UniqueName: \"kubernetes.io/projected/056030ad-19ca-4542-a486-139eb62524b0-kube-api-access-vmbgj\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.519738 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-utilities\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.519823 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.519844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-catalog-content\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.520383 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-catalog-content\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.520926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-utilities\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.521273 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.021258964 +0000 UTC m=+187.676998079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.603622 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerStarted","Data":"88c6fd068c3900d96f1c3cb1e5cdb6638b47d5caf15cc888124d07ed4f75c162"} Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.608447 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6576b87f9c-mxbr7_7ffca8b8-930c-4a19-93ff-e47500546d2e/route-controller-manager/0.log" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.608540 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" event={"ID":"7ffca8b8-930c-4a19-93ff-e47500546d2e","Type":"ContainerDied","Data":"4a3fc71af70844cef626607981bf42fa47448fc1ca71db2f83b372351f119fde"} Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.608587 4724 scope.go:117] "RemoveContainer" containerID="c97f6a80673402dc556d8a667efc01d48311935195a0443f3472e2167a1a0f4c" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.608747 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.621049 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.621615 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:08:41.121587242 +0000 UTC m=+187.777326357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.656406 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb5gc"] Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.657861 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmbgj\" (UniqueName: \"kubernetes.io/projected/056030ad-19ca-4542-a486-139eb62524b0-kube-api-access-vmbgj\") pod \"redhat-marketplace-xb5gc\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.700333 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hj7c4"] Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.714690 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.721591 4724 ???:1] "http: TLS handshake error from 192.168.126.11:46378: no serving certificate available for the kubelet" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.722898 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"939ca031-3abd-432c-a810-b252a35fb690","Type":"ContainerStarted","Data":"387b32f18dd79caf503f665a955396c2cd078c6a4e7cf65bdab06b30a3f7a629"} Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.724058 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.764661 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj7c4"] Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.767745 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.768802 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.268781979 +0000 UTC m=+187.924521084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.801965 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.803426 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.813557 4724 patch_prober.go:28] interesting pod/apiserver-76f77b778f-wdxr7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.813665 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" podUID="0eb89f1c-1230-4455-86c1-6ad3796969a9" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.825092 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.825556 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht4q4\" (UniqueName: \"kubernetes.io/projected/f4930fbf-4372-4466-b084-a13dfa8a5415-kube-api-access-ht4q4\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.825808 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-catalog-content\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.825865 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-utilities\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.826013 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 11:08:41.325984193 +0000 UTC m=+187.981723308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.859986 4724 ???:1] "http: TLS handshake error from 192.168.126.11:46394: no serving certificate available for the kubelet" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.941907 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-utilities\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.942036 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht4q4\" (UniqueName: \"kubernetes.io/projected/f4930fbf-4372-4466-b084-a13dfa8a5415-kube-api-access-ht4q4\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.944860 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-utilities\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.947869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-catalog-content\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.950601 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-catalog-content\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:40 crc kubenswrapper[4724]: I0226 11:08:40.950763 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:40 crc kubenswrapper[4724]: E0226 11:08:40.951276 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.451256104 +0000 UTC m=+188.106995219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.033164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht4q4\" (UniqueName: \"kubernetes.io/projected/f4930fbf-4372-4466-b084-a13dfa8a5415-kube-api-access-ht4q4\") pod \"redhat-marketplace-hj7c4\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.062082 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.062856 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.562831682 +0000 UTC m=+188.218570797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.081624 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.118698 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.140538 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mxbr7"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.172131 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:41 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:41 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:41 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.172237 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.178049 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.179148 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.679128856 +0000 UTC m=+188.334867971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.249308 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mqtct"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.250624 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.260027 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.288080 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.288634 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.788609985 +0000 UTC m=+188.444349100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.296189 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqtct"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.302658 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zhtn5" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.390456 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.390571 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.390618 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvcv\" (UniqueName: \"kubernetes.io/projected/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-kube-api-access-6vvcv\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.390648 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-utilities\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc 
kubenswrapper[4724]: E0226 11:08:41.391204 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.891170546 +0000 UTC m=+188.546909661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.391519 4724 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-md2vv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.391577 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" podUID="207d3079-e7ed-46b9-8744-aed50bb42352" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.402022 4724 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-md2vv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.402090 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" podUID="207d3079-e7ed-46b9-8744-aed50bb42352" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.453733 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2gkcb"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.492280 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.492642 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-utilities\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: 
I0226 11:08:41.492749 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.492860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvcv\" (UniqueName: \"kubernetes.io/projected/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-kube-api-access-6vvcv\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.493414 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:41.993369456 +0000 UTC m=+188.649108561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.493883 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-utilities\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.494156 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.546975 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9shd"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.585269 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvcv\" (UniqueName: \"kubernetes.io/projected/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-kube-api-access-6vvcv\") pod \"redhat-operators-mqtct\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.601573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.602698 4724 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.10268102 +0000 UTC m=+188.758420135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.617543 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.634047 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlps5"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.653606 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.654869 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.665347 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.665604 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.665762 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.666010 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.666166 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.669230 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.689793 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-64lrq"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.700298 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64lrq"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.700439 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.703514 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.704103 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.204078118 +0000 UTC m=+188.859817233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.707657 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.805409 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-config\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806108 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-utilities\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806213 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4d8\" (UniqueName: \"kubernetes.io/projected/4f727f37-5bac-476b-88a0-3d751c47e264-kube-api-access-7k4d8\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806253 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f62s\" (UniqueName: \"kubernetes.io/projected/62636f36-ae35-4f03-a34d-f3d57c880c2a-kube-api-access-9f62s\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-client-ca\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806470 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62636f36-ae35-4f03-a34d-f3d57c880c2a-serving-cert\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.806500 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-catalog-content\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.807030 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.30701403 +0000 UTC m=+188.962753145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.811498 4724 generic.go:334] "Generic (PLEG): container finished" podID="f3546882-cc78-45d2-b99d-9d14605bdc5b" containerID="f5d855befd6f0cf09abce249c8c865a342d106e58162aa0b964bfc614c10c871" exitCode=0 Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.811630 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" event={"ID":"f3546882-cc78-45d2-b99d-9d14605bdc5b","Type":"ContainerDied","Data":"f5d855befd6f0cf09abce249c8c865a342d106e58162aa0b964bfc614c10c871"} Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.877847 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6a646f4-99de-431f-a70b-109291465b0a","Type":"ContainerStarted","Data":"b166e5238b32a70e4994c3d879e66b3a86b182b1497b8e1db7fd5c50e0213652"} Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.894796 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerStarted","Data":"a6a537e52ccc09d66935aa7c4a8baeb2bc98d2189e1ae1e2faffe5767d58618f"} Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911067 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911536 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-utilities\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k4d8\" (UniqueName: \"kubernetes.io/projected/4f727f37-5bac-476b-88a0-3d751c47e264-kube-api-access-7k4d8\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911627 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f62s\" (UniqueName: \"kubernetes.io/projected/62636f36-ae35-4f03-a34d-f3d57c880c2a-kube-api-access-9f62s\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911696 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-client-ca\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911721 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62636f36-ae35-4f03-a34d-f3d57c880c2a-serving-cert\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911737 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-catalog-content\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.911762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-config\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.913082 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-config\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: E0226 11:08:41.913168 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.413150863 +0000 UTC m=+189.068889978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.913509 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-utilities\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.914940 4724 generic.go:334] "Generic (PLEG): container finished" podID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerID="4813411aa567eae908b02addf2ce6181ac31597794f264c7ee8b1d0852ce8da2" exitCode=0 Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.915194 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerDied","Data":"4813411aa567eae908b02addf2ce6181ac31597794f264c7ee8b1d0852ce8da2"} Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.919629 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-catalog-content\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.926337 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-client-ca\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.939915 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.939889378 podStartE2EDuration="6.939889378s" podCreationTimestamp="2026-02-26 11:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:41.917325973 +0000 UTC m=+188.573065088" watchObservedRunningTime="2026-02-26 11:08:41.939889378 +0000 UTC m=+188.595628493" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.960106 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj7c4"] Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.963543 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f62s\" (UniqueName: \"kubernetes.io/projected/62636f36-ae35-4f03-a34d-f3d57c880c2a-kube-api-access-9f62s\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:41 crc kubenswrapper[4724]: I0226 11:08:41.965833 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/62636f36-ae35-4f03-a34d-f3d57c880c2a-serving-cert\") pod \"route-controller-manager-d48c54f4-cq7l7\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") " pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.012156 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k4d8\" (UniqueName: \"kubernetes.io/projected/4f727f37-5bac-476b-88a0-3d751c47e264-kube-api-access-7k4d8\") pod \"redhat-operators-64lrq\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.014757 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ffca8b8-930c-4a19-93ff-e47500546d2e" path="/var/lib/kubelet/pods/7ffca8b8-930c-4a19-93ff-e47500546d2e/volumes" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.019389 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.019917 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.519879964 +0000 UTC m=+189.175619079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.024722 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerStarted","Data":"ffaf192074c82abc6ecb1f812222e63630bfb46a7574913f2c2b3e520905ae73"} Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.055109 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.058476 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"] Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.062663 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerStarted","Data":"45c4660e36fb377359a9fee6c2d3bdf4813d4e5e737535224182c76870f555bb"} Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.092585 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb5gc"] Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.103638 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.115815 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"939ca031-3abd-432c-a810-b252a35fb690","Type":"ContainerStarted","Data":"e861dcae5ef31a3a78a2a98e18023c8e56b94b9f0e026f061911f38438b68399"} Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.120868 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.121584 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.62155854 +0000 UTC m=+189.277297655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.139356 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.139316507 podStartE2EDuration="5.139316507s" podCreationTimestamp="2026-02-26 11:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:42.138537765 +0000 UTC m=+188.794276880" watchObservedRunningTime="2026-02-26 11:08:42.139316507 +0000 UTC m=+188.795055622" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.154611 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:42 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:42 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:42 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.155261 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.189824 4724 ???:1] "http: TLS handshake error from 192.168.126.11:46402: no serving certificate available for the kubelet" Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.223759 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.225330 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.725309335 +0000 UTC m=+189.381048450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.325446 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.326433 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.826405244 +0000 UTC m=+189.482144359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.429857 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.430853 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:42.930827038 +0000 UTC m=+189.586566153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.534900 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.534917 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqtct"] Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.535632 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.035607403 +0000 UTC m=+189.691346518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.637020 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.637550 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.137532346 +0000 UTC m=+189.793271461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.651372 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64lrq"] Feb 26 11:08:42 crc kubenswrapper[4724]: W0226 11:08:42.669047 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f727f37_5bac_476b_88a0_3d751c47e264.slice/crio-40b5bc5c59227c129f863c296ab3e604d2a35c05fe0380cbbd325edc4f7c77c2 WatchSource:0}: Error finding container 40b5bc5c59227c129f863c296ab3e604d2a35c05fe0380cbbd325edc4f7c77c2: Status 404 returned error can't find the container with id 40b5bc5c59227c129f863c296ab3e604d2a35c05fe0380cbbd325edc4f7c77c2 Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.738881 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.739647 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.239614203 +0000 UTC m=+189.895353318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.801313 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"] Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.841234 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.841768 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.341740932 +0000 UTC m=+189.997480047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.942567 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.942762 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.442713598 +0000 UTC m=+190.098452713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:42 crc kubenswrapper[4724]: I0226 11:08:42.943720 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:42 crc kubenswrapper[4724]: E0226 11:08:42.944383 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.444357765 +0000 UTC m=+190.100096880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.046389 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.046552 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.546525255 +0000 UTC m=+190.202264370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.050774 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.051414 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.551396834 +0000 UTC m=+190.207135949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.126361 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerStarted","Data":"8aed15bb9b39edae84defea99105065b1b766858de45578a4b28437949baf680"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.128304 4724 generic.go:334] "Generic (PLEG): container finished" podID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerID="bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d" exitCode=0 Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.128372 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerDied","Data":"bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.129623 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" event={"ID":"bee358c6-9602-47d4-8780-25220a7289b0","Type":"ContainerStarted","Data":"3b818dde758393c50bdb9524620fb00fa265caec4f2e5d5267d6435d7fce879e"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.131295 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" event={"ID":"62636f36-ae35-4f03-a34d-f3d57c880c2a","Type":"ContainerStarted","Data":"bc2d54282750a59d726d29e5ea4df1bad3970ab26aa7a2382fff800aace9fe85"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.133773 4724 generic.go:334] "Generic (PLEG): container finished" podID="a6a646f4-99de-431f-a70b-109291465b0a" containerID="b166e5238b32a70e4994c3d879e66b3a86b182b1497b8e1db7fd5c50e0213652" exitCode=0 Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.133855 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6a646f4-99de-431f-a70b-109291465b0a","Type":"ContainerDied","Data":"b166e5238b32a70e4994c3d879e66b3a86b182b1497b8e1db7fd5c50e0213652"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.137466 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerStarted","Data":"fe610450de497b52e584cf20ae2c72ee02d515ac0ee2a8b7e10cb1bd435d3960"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.141884 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerStarted","Data":"782efe2a1aceeb3d8ea72d619d55d5d6ff56d16d25dda3d5a2823c041cf21d2e"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.151924 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" 
event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerStarted","Data":"40b5bc5c59227c129f863c296ab3e604d2a35c05fe0380cbbd325edc4f7c77c2"} Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.159029 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.159551 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.659533705 +0000 UTC m=+190.315272820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.160064 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:43 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:43 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:43 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.168270 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.267382 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.268781 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.768759296 +0000 UTC m=+190.424498411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.368773 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.369237 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.869221318 +0000 UTC m=+190.524960423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.402066 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-md2vv" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.470584 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.471318 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:43.971294595 +0000 UTC m=+190.627033900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.576619 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.577084 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.077055947 +0000 UTC m=+190.732795062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.679841 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.680395 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.1803786 +0000 UTC m=+190.836117715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.748873 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.781446 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.781892 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.28186091 +0000 UTC m=+190.937600025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.882759 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume\") pod \"f3546882-cc78-45d2-b99d-9d14605bdc5b\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.883388 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gbts\" (UniqueName: \"kubernetes.io/projected/f3546882-cc78-45d2-b99d-9d14605bdc5b-kube-api-access-8gbts\") pod \"f3546882-cc78-45d2-b99d-9d14605bdc5b\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.883749 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3546882-cc78-45d2-b99d-9d14605bdc5b-secret-volume\") pod \"f3546882-cc78-45d2-b99d-9d14605bdc5b\" (UID: \"f3546882-cc78-45d2-b99d-9d14605bdc5b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.884217 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.884715 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.384694899 +0000 UTC m=+191.040434014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.885837 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3546882-cc78-45d2-b99d-9d14605bdc5b" (UID: "f3546882-cc78-45d2-b99d-9d14605bdc5b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.908083 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3546882-cc78-45d2-b99d-9d14605bdc5b-kube-api-access-8gbts" (OuterVolumeSpecName: "kube-api-access-8gbts") pod "f3546882-cc78-45d2-b99d-9d14605bdc5b" (UID: "f3546882-cc78-45d2-b99d-9d14605bdc5b"). InnerVolumeSpecName "kube-api-access-8gbts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.927804 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3546882-cc78-45d2-b99d-9d14605bdc5b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3546882-cc78-45d2-b99d-9d14605bdc5b" (UID: "f3546882-cc78-45d2-b99d-9d14605bdc5b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.985890 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.986578 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3546882-cc78-45d2-b99d-9d14605bdc5b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.986605 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3546882-cc78-45d2-b99d-9d14605bdc5b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:43 crc kubenswrapper[4724]: I0226 11:08:43.986620 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gbts\" (UniqueName: \"kubernetes.io/projected/f3546882-cc78-45d2-b99d-9d14605bdc5b-kube-api-access-8gbts\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:43 crc kubenswrapper[4724]: E0226 11:08:43.986729 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.486703685 +0000 UTC m=+191.142442800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.090023 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.092625 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.592604941 +0000 UTC m=+191.248344056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.156755 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:44 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:44 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:44 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.156845 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.198486 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.199892 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.699861637 +0000 UTC m=+191.355600752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.236010 4724 generic.go:334] "Generic (PLEG): container finished" podID="939ca031-3abd-432c-a810-b252a35fb690" containerID="e861dcae5ef31a3a78a2a98e18023c8e56b94b9f0e026f061911f38438b68399" exitCode=0 Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.236103 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"939ca031-3abd-432c-a810-b252a35fb690","Type":"ContainerDied","Data":"e861dcae5ef31a3a78a2a98e18023c8e56b94b9f0e026f061911f38438b68399"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.247920 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.248875 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4" event={"ID":"f3546882-cc78-45d2-b99d-9d14605bdc5b","Type":"ContainerDied","Data":"8886c329888bd4aa2a5199bd788bd2859ccdf6061179b6a771aaa3cb6d45c3f1"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.248916 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8886c329888bd4aa2a5199bd788bd2859ccdf6061179b6a771aaa3cb6d45c3f1" Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.254473 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" event={"ID":"bee358c6-9602-47d4-8780-25220a7289b0","Type":"ContainerStarted","Data":"63fe21f95ff31f7eb73b318c81e5b366fb4cd85c7aeff656687be0be268de92e"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.261450 4724 generic.go:334] "Generic (PLEG): container finished" podID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerID="a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10" exitCode=0 Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.261519 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerDied","Data":"a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.276355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerStarted","Data":"68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.285235 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerStarted","Data":"2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.288026 4724 generic.go:334] "Generic (PLEG): container finished" podID="0eb55921-4244-4557-aa72-97cea802c3fb" 
containerID="e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783" exitCode=0 Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.288106 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerDied","Data":"e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.292930 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerID="edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462" exitCode=0 Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.293328 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerDied","Data":"edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462"} Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.303123 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.303552 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.80353762 +0000 UTC m=+191.459276735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.405256 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.406038 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.905976587 +0000 UTC m=+191.561715702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.406157 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.406807 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:44.90678359 +0000 UTC m=+191.562522715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.508045 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.508673 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.008623111 +0000 UTC m=+191.664362226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.607682 4724 ???:1] "http: TLS handshake error from 192.168.126.11:46410: no serving certificate available for the kubelet" Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.611035 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.611481 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.11146429 +0000 UTC m=+191.767203405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.715651 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.716099 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.216027109 +0000 UTC m=+191.871766234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.716583 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.717148 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.21713162 +0000 UTC m=+191.872870735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.817080 4724 ???:1] "http: TLS handshake error from 192.168.126.11:46412: no serving certificate available for the kubelet" Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.818068 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.818584 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.318541358 +0000 UTC m=+191.974280473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:44 crc kubenswrapper[4724]: I0226 11:08:44.921500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:44 crc kubenswrapper[4724]: E0226 11:08:44.922100 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.422079227 +0000 UTC m=+192.077818342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.024144 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.027367 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.527332625 +0000 UTC m=+192.183071740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.048839 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.127589 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6a646f4-99de-431f-a70b-109291465b0a-kube-api-access\") pod \"a6a646f4-99de-431f-a70b-109291465b0a\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.127897 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6a646f4-99de-431f-a70b-109291465b0a-kubelet-dir\") pod \"a6a646f4-99de-431f-a70b-109291465b0a\" (UID: \"a6a646f4-99de-431f-a70b-109291465b0a\") " Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.128359 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.128996 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.628976299 +0000 UTC m=+192.284715414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.129218 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a646f4-99de-431f-a70b-109291465b0a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a6a646f4-99de-431f-a70b-109291465b0a" (UID: "a6a646f4-99de-431f-a70b-109291465b0a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.148107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a646f4-99de-431f-a70b-109291465b0a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a6a646f4-99de-431f-a70b-109291465b0a" (UID: "a6a646f4-99de-431f-a70b-109291465b0a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.160258 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:45 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:45 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:45 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.160340 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.229773 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.230440 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6a646f4-99de-431f-a70b-109291465b0a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.230463 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6a646f4-99de-431f-a70b-109291465b0a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.230585 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.730554082 +0000 UTC m=+192.386293197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.313138 4724 patch_prober.go:28] interesting pod/console-f9d7485db-9cwcb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.313782 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9cwcb" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.325821 4724 generic.go:334] "Generic (PLEG): container finished" podID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerID="2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e" exitCode=0 Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.325903 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerDied","Data":"2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e"} Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.330585 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f727f37-5bac-476b-88a0-3d751c47e264" containerID="e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6" exitCode=0 Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.330641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerDied","Data":"e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6"} Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.332372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.332743 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.832729222 +0000 UTC m=+192.488468337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.348722 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" event={"ID":"62636f36-ae35-4f03-a34d-f3d57c880c2a","Type":"ContainerStarted","Data":"199721e1b203772f03088425a91c9a390a6a80498fb8a2c23f614043eecc34f7"} Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.348876 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.348913 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.349157 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.349195 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.356562 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.357022 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6a646f4-99de-431f-a70b-109291465b0a","Type":"ContainerDied","Data":"bd7a6e998ae6328a2524e3c62b12876199b993180377328d2a40ad3248c27a58"} Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.357092 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7a6e998ae6328a2524e3c62b12876199b993180377328d2a40ad3248c27a58" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.372561 4724 generic.go:334] "Generic (PLEG): container finished" podID="056030ad-19ca-4542-a486-139eb62524b0" containerID="68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52" exitCode=0 Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.374068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerDied","Data":"68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52"} Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.433828 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.434640 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.934599194 +0000 UTC m=+192.590338319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.440753 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.443456 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:45.943436056 +0000 UTC m=+192.599175431 (durationBeforeRetry 500ms). 
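The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" pairs above come from the kubelet's Pod Lifecycle Event Generator, which turns container state changes observed in the runtime into events keyed by pod UID and container ID; the sync loop then re-syncs the affected pod, which is how the dead revision-pruner-8-crc sandbox gets replaced. A sketch of the event shape, not the real kubelet types:

    // Illustrative shape of the PLEG events logged above.
    package main

    import "fmt"

    type PodLifecycleEventType string

    const (
        ContainerStarted PodLifecycleEventType = "ContainerStarted"
        ContainerDied    PodLifecycleEventType = "ContainerDied"
    )

    type PodLifecycleEvent struct {
        ID   string // pod UID
        Type PodLifecycleEventType
        Data string // container or sandbox ID (interface{} in the real kubelet)
    }

    func main() {
        events := make(chan PodLifecycleEvent, 1)
        events <- PodLifecycleEvent{
            ID:   "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f",
            Type: ContainerDied,
            Data: "2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e",
        }
        ev := <-events
        // The sync loop would re-sync the affected pod here.
        fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }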
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.469036 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podStartSLOduration=11.469011267 podStartE2EDuration="11.469011267s" podCreationTimestamp="2026-02-26 11:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:45.466023612 +0000 UTC m=+192.121762727" watchObservedRunningTime="2026-02-26 11:08:45.469011267 +0000 UTC m=+192.124750382" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.543854 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.544587 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.044557746 +0000 UTC m=+192.700296871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.647417 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.649388 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.149366612 +0000 UTC m=+192.805105727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.752814 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.753395 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.253369644 +0000 UTC m=+192.909108759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.811245 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.845970 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wdxr7" Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.855298 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.857851 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.357819409 +0000 UTC m=+193.013558734 (durationBeforeRetry 500ms). 
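The alternating I/E pairs for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 show the operation executor's backoff at work: a failed mount or unmount records an earliest-retry deadline, and any reconciler pass that arrives before the deadline is rejected with "No retries permitted until ...". In this excerpt the delay is 500ms per failure while the reconciler retries roughly every 100ms. A small sketch of that gating, assuming a fixed delay (kubelet's real bookkeeping lives in nestedpendingoperations.go and is more involved):

    // Sketch of retry gating: a failed operation may not run again until
    // its backoff deadline passes, so most reconciler passes are rejected.
    package main

    import (
        "fmt"
        "time"
    )

    type pendingOp struct {
        notBefore time.Time     // earliest permitted retry
        backoff   time.Duration // observed as 500ms in this log
    }

    func (op *pendingOp) tryRun(now time.Time, run func() error) error {
        if now.Before(op.notBefore) {
            return fmt.Errorf("no retries permitted until %v (durationBeforeRetry %v)",
                op.notBefore, op.backoff)
        }
        if err := run(); err != nil {
            op.notBefore = now.Add(op.backoff) // arm the deadline again
            return err
        }
        return nil
    }

    func main() {
        op := &pendingOp{backoff: 500 * time.Millisecond}
        mount := func() error { return fmt.Errorf("driver not registered") }

        now := time.Now()
        fmt.Println(op.tryRun(now, mount))                           // fails, arms deadline
        fmt.Println(op.tryRun(now.Add(100*time.Millisecond), mount)) // rejected: too early
        fmt.Println(op.tryRun(now.Add(600*time.Millisecond), mount)) // retried, fails again
    }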
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.967134 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.967563 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.467519144 +0000 UTC m=+193.123258259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:45 crc kubenswrapper[4724]: I0226 11:08:45.968155 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:45 crc kubenswrapper[4724]: E0226 11:08:45.968617 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.468607206 +0000 UTC m=+193.124346321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.055272 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.069060 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.069887 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.569852469 +0000 UTC m=+193.225591594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.124586 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.150389 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.156710 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:46 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:46 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:46 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.156809 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.170848 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939ca031-3abd-432c-a810-b252a35fb690-kube-api-access\") pod \"939ca031-3abd-432c-a810-b252a35fb690\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.170914 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939ca031-3abd-432c-a810-b252a35fb690-kubelet-dir\") pod \"939ca031-3abd-432c-a810-b252a35fb690\" (UID: \"939ca031-3abd-432c-a810-b252a35fb690\") " Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.171545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.174636 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/939ca031-3abd-432c-a810-b252a35fb690-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "939ca031-3abd-432c-a810-b252a35fb690" (UID: "939ca031-3abd-432c-a810-b252a35fb690"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.174987 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.674972363 +0000 UTC m=+193.330711478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.207672 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939ca031-3abd-432c-a810-b252a35fb690-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "939ca031-3abd-432c-a810-b252a35fb690" (UID: "939ca031-3abd-432c-a810-b252a35fb690"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.276338 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.277246 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939ca031-3abd-432c-a810-b252a35fb690-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.277299 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939ca031-3abd-432c-a810-b252a35fb690-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.279132 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.779078209 +0000 UTC m=+193.434817324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.399701 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.400912 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:46.90089093 +0000 UTC m=+193.556630045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.431918 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"939ca031-3abd-432c-a810-b252a35fb690","Type":"ContainerDied","Data":"387b32f18dd79caf503f665a955396c2cd078c6a4e7cf65bdab06b30a3f7a629"} Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.432027 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387b32f18dd79caf503f665a955396c2cd078c6a4e7cf65bdab06b30a3f7a629" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.432418 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.435289 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.454709 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.502682 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.513926 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.01389325 +0000 UTC m=+193.669632365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.548590 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podStartSLOduration=12.54854741 podStartE2EDuration="12.54854741s" podCreationTimestamp="2026-02-26 11:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:46.537501344 +0000 UTC m=+193.193240479" watchObservedRunningTime="2026-02-26 11:08:46.54854741 +0000 UTC m=+193.204286525" Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.617694 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.618477 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.118458318 +0000 UTC m=+193.774197433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.719138 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.719982 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.219953619 +0000 UTC m=+193.875692734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.822796 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.823403 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.323379525 +0000 UTC m=+193.979118640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:46 crc kubenswrapper[4724]: I0226 11:08:46.924391 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:46 crc kubenswrapper[4724]: E0226 11:08:46.924800 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.424777823 +0000 UTC m=+194.080516938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.014638 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.014991 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.026575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.027222 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.52719136 +0000 UTC m=+194.182930465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.127999 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.128320 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.628269418 +0000 UTC m=+194.284008533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.128474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.128985 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.628967818 +0000 UTC m=+194.284706933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.152087 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:47 crc kubenswrapper[4724]: [-]has-synced failed: reason withheld Feb 26 11:08:47 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:47 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.152202 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.230221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.230401 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.730369356 +0000 UTC m=+194.386108471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.230678 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.231154 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.731142818 +0000 UTC m=+194.386881933 (durationBeforeRetry 500ms). 
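The router's startup probe output above is an aggregated healthz page: each sub-check reports "[+]name ok" or "[-]name failed: reason withheld", and any failing check turns the overall response into HTTP 500, which the kubelet records as "HTTP probe failed with statuscode: 500". Note that has-synced flips from [-] to [+] later in this excerpt, narrowing the remaining failure to backend-http. A sketch of a handler producing this format, modeled on the Kubernetes healthz style rather than the router's actual code:

    // Sketch of an aggregated healthz endpoint in the format seen above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
    )

    type check struct {
        name string
        fn   func() error
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    // Real pages withhold the reason from unauthenticated callers.
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError) // logged as statuscode: 500
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        h := healthz([]check{
            {"backend-http", func() error { return fmt.Errorf("no backends") }},
            {"has-synced", func() error { return fmt.Errorf("not synced") }},
            {"process-running", func() error { return nil }},
        })
        srv := httptest.NewServer(h)
        defer srv.Close()
        resp, _ := http.Get(srv.URL)
        b, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode) // 500, as in the probe failures above
        fmt.Print(string(b))
    }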
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.332306 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.333034 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.8330038 +0000 UTC m=+194.488742925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.433694 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.436291 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.436650 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:47.936629401 +0000 UTC m=+194.592368706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.538136 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.538387 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.038355108 +0000 UTC m=+194.694094233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.538686 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.539136 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.03912436 +0000 UTC m=+194.694863475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.640529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.640846 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.140765305 +0000 UTC m=+194.796504420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.640935 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.641275 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.141260189 +0000 UTC m=+194.796999304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.743052 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.743252 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.243221364 +0000 UTC m=+194.898960479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.743465 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.744030 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.244015656 +0000 UTC m=+194.899754771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.845102 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.845520 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.345417074 +0000 UTC m=+195.001156189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.845761 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.846495 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.346486375 +0000 UTC m=+195.002225490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.947001 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.947303 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.447260455 +0000 UTC m=+195.102999570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:47 crc kubenswrapper[4724]: I0226 11:08:47.947646 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:47 crc kubenswrapper[4724]: E0226 11:08:47.948124 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.448115949 +0000 UTC m=+195.103855054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.048794 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.049700 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.54962561 +0000 UTC m=+195.205364725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.149520 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 11:08:48 crc kubenswrapper[4724]: [+]has-synced ok Feb 26 11:08:48 crc kubenswrapper[4724]: [+]process-running ok Feb 26 11:08:48 crc kubenswrapper[4724]: healthz check failed Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.149673 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.150565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.151046 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.651030629 +0000 UTC m=+195.306769744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.252278 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.252898 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.752863029 +0000 UTC m=+195.408602144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.356213 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.356775 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.856755918 +0000 UTC m=+195.512495033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.457641 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.457797 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.957768965 +0000 UTC m=+195.613508080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.458112 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.458568 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:48.958555997 +0000 UTC m=+195.614295112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.463206 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" event={"ID":"4c9eec4e-df3c-411b-8629-421f3abfb500","Type":"ContainerStarted","Data":"2740a294989ca02f7643a720809d688955c0ff878ed2b86bf3135f5cf91af660"} Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.559216 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.559891 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.059840852 +0000 UTC m=+195.715579967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.668783 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.669460 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.169432133 +0000 UTC m=+195.825171438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.769949 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.770161 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.270117661 +0000 UTC m=+195.925856776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.772085 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.772928 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.2729049 +0000 UTC m=+195.928644015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.876782 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.876940 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.376905113 +0000 UTC m=+196.032644228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.877068 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.877617 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.377605953 +0000 UTC m=+196.033345068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:48 crc kubenswrapper[4724]: I0226 11:08:48.979558 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:48 crc kubenswrapper[4724]: E0226 11:08:48.980149 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.480102332 +0000 UTC m=+196.135841447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.082285 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.082892 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.582867529 +0000 UTC m=+196.238606644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.155671 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.166215 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h27ll" Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.184322 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.184519 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.684477173 +0000 UTC m=+196.340216288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.185444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.185878 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.685870083 +0000 UTC m=+196.341609198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.290537 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.292084 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.792053487 +0000 UTC m=+196.447792602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.393220 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.393883 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.893853637 +0000 UTC m=+196.549592932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.494319 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.494486 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.994459222 +0000 UTC m=+196.650198337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.494839 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.495380 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:49.995372018 +0000 UTC m=+196.651111123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.568467 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" event={"ID":"4c9eec4e-df3c-411b-8629-421f3abfb500","Type":"ContainerStarted","Data":"2859171d3d80af1949c96c1f38771b23efb41396bb42d845fd4880777a4f8e45"} Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.600590 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.601292 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.101262435 +0000 UTC m=+196.757001550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.703078 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.704662 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.204647639 +0000 UTC m=+196.860386754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.810221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.810734 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.310711751 +0000 UTC m=+196.966450856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.912552 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:49 crc kubenswrapper[4724]: E0226 11:08:49.913520 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.413491168 +0000 UTC m=+197.069230283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.943594 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:49 crc kubenswrapper[4724]: I0226 11:08:49.951710 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.030014 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.031981 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.531953964 +0000 UTC m=+197.187693079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.132488 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.133101 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.633068994 +0000 UTC m=+197.288808329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.233660 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.233895 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.733852304 +0000 UTC m=+197.389591429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.234132 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.234619 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.734610096 +0000 UTC m=+197.390349201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.335996 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.336292 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.83624818 +0000 UTC m=+197.491987295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.336468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.336898 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.836876728 +0000 UTC m=+197.492615843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.442888 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.443047 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.943016042 +0000 UTC m=+197.598755157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.443355 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.443709 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:50.943700911 +0000 UTC m=+197.599440016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.545419 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.545917 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.045891092 +0000 UTC m=+197.701630207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.597054 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" event={"ID":"4c9eec4e-df3c-411b-8629-421f3abfb500","Type":"ContainerStarted","Data":"a05a491af71b25b82b568d7e409a2646556944ddc792a65e5de4e357d74c7bdb"} Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.647071 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.647559 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.147536605 +0000 UTC m=+197.803275720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.771066 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.771347 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.271304425 +0000 UTC m=+197.927043540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.774565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.776904 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.276882082 +0000 UTC m=+197.932621197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.881447 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.881779 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.381701433 +0000 UTC m=+198.037440548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:50 crc kubenswrapper[4724]: I0226 11:08:50.882086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:50 crc kubenswrapper[4724]: E0226 11:08:50.882819 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.382801552 +0000 UTC m=+198.038540667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.000490 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.001094 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.501046437 +0000 UTC m=+198.156785552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.102765 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.103230 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.603213579 +0000 UTC m=+198.258952694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.161554 4724 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.204365 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.204722 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.704678511 +0000 UTC m=+198.360417626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.204815 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.205506 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.705495703 +0000 UTC m=+198.361234818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.243934 4724 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T11:08:51.161600427Z","Handler":null,"Name":""} Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.306197 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.306990 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.806959926 +0000 UTC m=+198.462699041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.408433 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.408884 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:51.90886805 +0000 UTC m=+198.564607165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.509749 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.509986 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 11:08:52.009933073 +0000 UTC m=+198.665672188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.510380 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:51 crc kubenswrapper[4724]: E0226 11:08:51.510874 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 11:08:52.010863347 +0000 UTC m=+198.666602462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vxxfb" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.547157 4724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.547315 4724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.612088 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.630390 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.652560 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" podStartSLOduration=28.652536309 podStartE2EDuration="28.652536309s" podCreationTimestamp="2026-02-26 11:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:08:51.650006132 +0000 UTC m=+198.305745267" watchObservedRunningTime="2026-02-26 11:08:51.652536309 +0000 UTC m=+198.308275424" Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.715067 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.717981 4724 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.718037 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:51 crc kubenswrapper[4724]: I0226 11:08:51.990407 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.123002 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vxxfb\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.219333 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-4rspm" podUID="4c9eec4e-df3c-411b-8629-421f3abfb500" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.36:9898/healthz\": dial tcp 10.217.0.36:9898: connect: connection refused"
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.374340 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.383557 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb"
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.545091 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"]
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.545403 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager" containerID="cri-o://63fe21f95ff31f7eb73b318c81e5b366fb4cd85c7aeff656687be0be268de92e" gracePeriod=30
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.641765 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"]
Feb 26 11:08:52 crc kubenswrapper[4724]: I0226 11:08:52.642124 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager" containerID="cri-o://199721e1b203772f03088425a91c9a390a6a80498fb8a2c23f614043eecc34f7" gracePeriod=30
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.134931 4724 ???:1] "http: TLS handshake error from 192.168.126.11:45338: no serving certificate available for the kubelet"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.329360 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9cwcb"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.333339 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9cwcb"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.347873 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.347936 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.348359 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.348388 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.348426 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-k5ktg"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.349270 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ee6d3de71827e2c28c30694e0167b2d98f1b93820b63b7f563d157c0b08b21b9"} pod="openshift-console/downloads-7954f5f757-k5ktg" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.349371 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" containerID="cri-o://ee6d3de71827e2c28c30694e0167b2d98f1b93820b63b7f563d157c0b08b21b9" gracePeriod=2
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.349149 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.353924 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.686881 4724 generic.go:334] "Generic (PLEG): container finished" podID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerID="199721e1b203772f03088425a91c9a390a6a80498fb8a2c23f614043eecc34f7" exitCode=0
Feb 26 11:08:55 crc kubenswrapper[4724]: I0226 11:08:55.686947 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" event={"ID":"62636f36-ae35-4f03-a34d-f3d57c880c2a","Type":"ContainerDied","Data":"199721e1b203772f03088425a91c9a390a6a80498fb8a2c23f614043eecc34f7"}
Feb 26 11:08:56 crc kubenswrapper[4724]: I0226 11:08:56.723835 4724 generic.go:334] "Generic (PLEG): container finished" podID="bee358c6-9602-47d4-8780-25220a7289b0" containerID="63fe21f95ff31f7eb73b318c81e5b366fb4cd85c7aeff656687be0be268de92e" exitCode=0
Feb 26 11:08:56 crc kubenswrapper[4724]: I0226 11:08:56.724272 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" event={"ID":"bee358c6-9602-47d4-8780-25220a7289b0","Type":"ContainerDied","Data":"63fe21f95ff31f7eb73b318c81e5b366fb4cd85c7aeff656687be0be268de92e"}
Feb 26 11:08:58 crc kubenswrapper[4724]: I0226 11:08:58.764821 4724 generic.go:334] "Generic (PLEG): container finished" podID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerID="ee6d3de71827e2c28c30694e0167b2d98f1b93820b63b7f563d157c0b08b21b9" exitCode=0
Feb 26 11:08:58 crc kubenswrapper[4724]: I0226 11:08:58.764916 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k5ktg" event={"ID":"7027d958-98c3-4fd1-9442-232be60e1eb7","Type":"ContainerDied","Data":"ee6d3de71827e2c28c30694e0167b2d98f1b93820b63b7f563d157c0b08b21b9"}
Feb 26 11:08:59 crc kubenswrapper[4724]: I0226 11:08:59.943321 4724 patch_prober.go:28] interesting pod/controller-manager-77f6dcdd99-w6bvn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 26 11:08:59 crc kubenswrapper[4724]: I0226 11:08:59.943404 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 26 11:09:02 crc kubenswrapper[4724]: I0226 11:09:02.057211 4724 patch_prober.go:28] interesting pod/route-controller-manager-d48c54f4-cq7l7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Feb 26 11:09:02 crc kubenswrapper[4724]: I0226 11:09:02.057873 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Feb 26 11:09:05 crc kubenswrapper[4724]: I0226 11:09:05.347206 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:09:05 crc kubenswrapper[4724]: I0226 11:09:05.347305 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:09:05 crc kubenswrapper[4724]: I0226 11:09:05.912127 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jl45p"
Feb 26 11:09:09 crc kubenswrapper[4724]: I0226 11:09:09.943045 4724 patch_prober.go:28] interesting pod/controller-manager-77f6dcdd99-w6bvn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 26 11:09:09 crc kubenswrapper[4724]: I0226 11:09:09.943850 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.056422 4724 patch_prober.go:28] interesting pod/route-controller-manager-d48c54f4-cq7l7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.056579 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.890110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.890214 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.890298 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.893144 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.894008 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.894258 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.902594 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.909447 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.914926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.991968 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:09:12 crc kubenswrapper[4724]: I0226 11:09:12.996359 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.194080 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.205161 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.392125 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.396955 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.702680 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 26 11:09:13 crc kubenswrapper[4724]: E0226 11:09:13.703048 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939ca031-3abd-432c-a810-b252a35fb690" containerName="pruner"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.703085 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="939ca031-3abd-432c-a810-b252a35fb690" containerName="pruner"
Feb 26 11:09:13 crc kubenswrapper[4724]: E0226 11:09:13.703129 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3546882-cc78-45d2-b99d-9d14605bdc5b" containerName="collect-profiles"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.703138 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3546882-cc78-45d2-b99d-9d14605bdc5b" containerName="collect-profiles"
Feb 26 11:09:13 crc kubenswrapper[4724]: E0226 11:09:13.703148 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a646f4-99de-431f-a70b-109291465b0a" containerName="pruner"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.703156 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a646f4-99de-431f-a70b-109291465b0a" containerName="pruner"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.703329 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a646f4-99de-431f-a70b-109291465b0a" containerName="pruner"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.703350 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3546882-cc78-45d2-b99d-9d14605bdc5b" containerName="collect-profiles"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.703358 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="939ca031-3abd-432c-a810-b252a35fb690" containerName="pruner"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.704011 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.708227 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.708409 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.718389 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.802425 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/224453ca-c149-4f84-b22f-d50a9994043e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.802484 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/224453ca-c149-4f84-b22f-d50a9994043e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.904360 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/224453ca-c149-4f84-b22f-d50a9994043e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.904457 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/224453ca-c149-4f84-b22f-d50a9994043e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.904529 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/224453ca-c149-4f84-b22f-d50a9994043e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:13 crc kubenswrapper[4724]: I0226 11:09:13.930826 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/224453ca-c149-4f84-b22f-d50a9994043e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:14 crc kubenswrapper[4724]: I0226 11:09:14.028848 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 26 11:09:15 crc kubenswrapper[4724]: I0226 11:09:15.348648 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:09:15 crc kubenswrapper[4724]: I0226 11:09:15.348832 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:09:15 crc kubenswrapper[4724]: I0226 11:09:15.645840 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54884: no serving certificate available for the kubelet"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.696718 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.697794 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.710424 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.783257 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kube-api-access\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.783572 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-var-lock\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.783674 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.884936 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-var-lock\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.885342 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.885467 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.885257 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-var-lock\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.885504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kube-api-access\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:18 crc kubenswrapper[4724]: I0226 11:09:18.927388 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kube-api-access\") pod \"installer-9-crc\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:19 crc kubenswrapper[4724]: I0226 11:09:19.017443 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 26 11:09:19 crc kubenswrapper[4724]: I0226 11:09:19.944093 4724 patch_prober.go:28] interesting pod/controller-manager-77f6dcdd99-w6bvn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 26 11:09:19 crc kubenswrapper[4724]: I0226 11:09:19.944211 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 26 11:09:22 crc kubenswrapper[4724]: I0226 11:09:22.057653 4724 patch_prober.go:28] interesting pod/route-controller-manager-d48c54f4-cq7l7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Feb 26 11:09:22 crc kubenswrapper[4724]: I0226 11:09:22.058514 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Feb 26 11:09:25 crc kubenswrapper[4724]: I0226 11:09:25.347718 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:09:25 crc kubenswrapper[4724]: I0226 11:09:25.349056 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:09:29 crc kubenswrapper[4724]: I0226 11:09:29.943674 4724 patch_prober.go:28] interesting pod/controller-manager-77f6dcdd99-w6bvn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 26 11:09:29 crc kubenswrapper[4724]: I0226 11:09:29.944556 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 26 11:09:32 crc kubenswrapper[4724]: I0226 11:09:32.056682 4724 patch_prober.go:28] interesting pod/route-controller-manager-d48c54f4-cq7l7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Feb 26 11:09:32 crc kubenswrapper[4724]: I0226 11:09:32.056815 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Feb 26 11:09:35 crc kubenswrapper[4724]: I0226 11:09:35.346339 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:09:35 crc kubenswrapper[4724]: I0226 11:09:35.347385 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:09:39 crc kubenswrapper[4724]: E0226 11:09:39.229898 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 26 11:09:39 crc kubenswrapper[4724]: E0226 11:09:39.231092 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 26 11:09:39 crc kubenswrapper[4724]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 26 11:09:39 crc kubenswrapper[4724]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s26nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535068-crjcm_openshift-infra(91b7ba35-3bf3-4738-8a71-d093b0e7fd12): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Feb 26 11:09:39 crc kubenswrapper[4724]: > logger="UnhandledError"
Feb 26 11:09:39 crc kubenswrapper[4724]: E0226 11:09:39.232364 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29535068-crjcm" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12"
Feb 26 11:09:40 crc kubenswrapper[4724]: E0226 11:09:40.065050 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535068-crjcm" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12"
Feb 26 11:09:40 crc kubenswrapper[4724]: I0226 11:09:40.942626 4724 patch_prober.go:28] interesting pod/controller-manager-77f6dcdd99-w6bvn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 11:09:40 crc kubenswrapper[4724]: I0226 11:09:40.942796 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.056856 4724 patch_prober.go:28] interesting pod/route-controller-manager-d48c54f4-cq7l7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.056943 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.889390 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.935706 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee358c6-9602-47d4-8780-25220a7289b0-serving-cert\") pod \"bee358c6-9602-47d4-8780-25220a7289b0\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") "
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.935855 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-proxy-ca-bundles\") pod \"bee358c6-9602-47d4-8780-25220a7289b0\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") "
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.938483 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c2w7\" (UniqueName: \"kubernetes.io/projected/bee358c6-9602-47d4-8780-25220a7289b0-kube-api-access-4c2w7\") pod \"bee358c6-9602-47d4-8780-25220a7289b0\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") "
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.938611 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-config\") pod \"bee358c6-9602-47d4-8780-25220a7289b0\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") "
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.938765 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-client-ca\") pod \"bee358c6-9602-47d4-8780-25220a7289b0\" (UID: \"bee358c6-9602-47d4-8780-25220a7289b0\") "
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.948541 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bee358c6-9602-47d4-8780-25220a7289b0" (UID: "bee358c6-9602-47d4-8780-25220a7289b0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.948680 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74b89969d5-gwmk8"]
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.948745 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-config" (OuterVolumeSpecName: "config") pod "bee358c6-9602-47d4-8780-25220a7289b0" (UID: "bee358c6-9602-47d4-8780-25220a7289b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:09:43 crc kubenswrapper[4724]: E0226 11:09:43.948950 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.948961 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.949100 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee358c6-9602-47d4-8780-25220a7289b0" containerName="controller-manager"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.949713 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.949757 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-config\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.950088 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "bee358c6-9602-47d4-8780-25220a7289b0" (UID: "bee358c6-9602-47d4-8780-25220a7289b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.950487 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.956712 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b89969d5-gwmk8"]
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.959031 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.965614 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee358c6-9602-47d4-8780-25220a7289b0-kube-api-access-4c2w7" (OuterVolumeSpecName: "kube-api-access-4c2w7") pod "bee358c6-9602-47d4-8780-25220a7289b0" (UID: "bee358c6-9602-47d4-8780-25220a7289b0"). InnerVolumeSpecName "kube-api-access-4c2w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:09:43 crc kubenswrapper[4724]: I0226 11:09:43.987827 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee358c6-9602-47d4-8780-25220a7289b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bee358c6-9602-47d4-8780-25220a7289b0" (UID: "bee358c6-9602-47d4-8780-25220a7289b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.050846 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-client-ca\") pod \"62636f36-ae35-4f03-a34d-f3d57c880c2a\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") "
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.050907 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f62s\" (UniqueName: \"kubernetes.io/projected/62636f36-ae35-4f03-a34d-f3d57c880c2a-kube-api-access-9f62s\") pod \"62636f36-ae35-4f03-a34d-f3d57c880c2a\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") "
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.050999 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62636f36-ae35-4f03-a34d-f3d57c880c2a-serving-cert\") pod \"62636f36-ae35-4f03-a34d-f3d57c880c2a\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") "
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.051106 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-config\") pod \"62636f36-ae35-4f03-a34d-f3d57c880c2a\" (UID: \"62636f36-ae35-4f03-a34d-f3d57c880c2a\") "
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.051587 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-config\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.051796 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9371739-6d1a-4872-b11e-b2e915349056-serving-cert\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.051906 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-proxy-ca-bundles\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.052005 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-client-ca\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.052056 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cx2v\" (UniqueName: \"kubernetes.io/projected/c9371739-6d1a-4872-b11e-b2e915349056-kube-api-access-2cx2v\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.052122 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c2w7\" (UniqueName: \"kubernetes.io/projected/bee358c6-9602-47d4-8780-25220a7289b0-kube-api-access-4c2w7\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.052141 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee358c6-9602-47d4-8780-25220a7289b0-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.052454 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee358c6-9602-47d4-8780-25220a7289b0-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.052249 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-client-ca" (OuterVolumeSpecName: "client-ca") pod "62636f36-ae35-4f03-a34d-f3d57c880c2a" (UID: "62636f36-ae35-4f03-a34d-f3d57c880c2a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.053761 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-config" (OuterVolumeSpecName: "config") pod "62636f36-ae35-4f03-a34d-f3d57c880c2a" (UID: "62636f36-ae35-4f03-a34d-f3d57c880c2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.056522 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62636f36-ae35-4f03-a34d-f3d57c880c2a-kube-api-access-9f62s" (OuterVolumeSpecName: "kube-api-access-9f62s") pod "62636f36-ae35-4f03-a34d-f3d57c880c2a" (UID: "62636f36-ae35-4f03-a34d-f3d57c880c2a"). InnerVolumeSpecName "kube-api-access-9f62s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.072581 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62636f36-ae35-4f03-a34d-f3d57c880c2a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "62636f36-ae35-4f03-a34d-f3d57c880c2a" (UID: "62636f36-ae35-4f03-a34d-f3d57c880c2a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.088248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn" event={"ID":"bee358c6-9602-47d4-8780-25220a7289b0","Type":"ContainerDied","Data":"3b818dde758393c50bdb9524620fb00fa265caec4f2e5d5267d6435d7fce879e"}
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.088317 4724 scope.go:117] "RemoveContainer" containerID="63fe21f95ff31f7eb73b318c81e5b366fb4cd85c7aeff656687be0be268de92e"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.088545 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.095642 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7" event={"ID":"62636f36-ae35-4f03-a34d-f3d57c880c2a","Type":"ContainerDied","Data":"bc2d54282750a59d726d29e5ea4df1bad3970ab26aa7a2382fff800aace9fe85"}
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.095814 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.140271 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"]
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.152524 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77f6dcdd99-w6bvn"]
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.154545 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-client-ca\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.153555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-client-ca\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.155442 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cx2v\" (UniqueName: \"kubernetes.io/projected/c9371739-6d1a-4872-b11e-b2e915349056-kube-api-access-2cx2v\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.155907 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-config\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.158465 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-config\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.158616 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9371739-6d1a-4872-b11e-b2e915349056-serving-cert\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.159231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-proxy-ca-bundles\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.159353 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.159369 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f62s\" (UniqueName: \"kubernetes.io/projected/62636f36-ae35-4f03-a34d-f3d57c880c2a-kube-api-access-9f62s\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.159382 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62636f36-ae35-4f03-a34d-f3d57c880c2a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.159392 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62636f36-ae35-4f03-a34d-f3d57c880c2a-config\") on node \"crc\" DevicePath \"\""
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.163215 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-proxy-ca-bundles\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.181785 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9371739-6d1a-4872-b11e-b2e915349056-serving-cert\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.186874 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cx2v\" (UniqueName: \"kubernetes.io/projected/c9371739-6d1a-4872-b11e-b2e915349056-kube-api-access-2cx2v\") pod \"controller-manager-74b89969d5-gwmk8\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.208692 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"]
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.212709 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d48c54f4-cq7l7"]
Feb 26 11:09:44 crc kubenswrapper[4724]: I0226 11:09:44.357783 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8"
Feb 26 11:09:45 crc kubenswrapper[4724]: I0226 11:09:45.346085 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Feb 26 11:09:45 crc kubenswrapper[4724]: I0226 11:09:45.346434 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:09:45 crc kubenswrapper[4724]: I0226 11:09:45.982815 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" path="/var/lib/kubelet/pods/62636f36-ae35-4f03-a34d-f3d57c880c2a/volumes"
Feb 26 11:09:45 crc kubenswrapper[4724]: I0226 11:09:45.983623 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee358c6-9602-47d4-8780-25220a7289b0" path="/var/lib/kubelet/pods/bee358c6-9602-47d4-8780-25220a7289b0/volumes"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.714875 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"]
Feb 26 11:09:46 crc kubenswrapper[4724]: E0226 11:09:46.715708 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.715726 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.715916 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="62636f36-ae35-4f03-a34d-f3d57c880c2a" containerName="route-controller-manager"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.721232 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"]
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.721407 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.724520 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.729093 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.729719 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.729923 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.730125 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.730363 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.731223 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.797041 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdh2g\" (UniqueName: \"kubernetes.io/projected/f0e798af-3465-4040-a183-3319e609a282-kube-api-access-fdh2g\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.797130 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-config\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.797189 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e798af-3465-4040-a183-3319e609a282-serving-cert\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.797236 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-client-ca\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.898641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdh2g\" (UniqueName: \"kubernetes.io/projected/f0e798af-3465-4040-a183-3319e609a282-kube-api-access-fdh2g\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.898720 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-config\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.898762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e798af-3465-4040-a183-3319e609a282-serving-cert\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.898806 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-client-ca\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.901138 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-client-ca\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.901220 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-config\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.906409 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.906493 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.909637 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e798af-3465-4040-a183-3319e609a282-serving-cert\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"
Feb 26 11:09:46 crc kubenswrapper[4724]: I0226 11:09:46.921054 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-fdh2g\" (UniqueName: \"kubernetes.io/projected/f0e798af-3465-4040-a183-3319e609a282-kube-api-access-fdh2g\") pod \"route-controller-manager-d558c998b-ftqpw\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" Feb 26 11:09:47 crc kubenswrapper[4724]: I0226 11:09:47.057935 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" Feb 26 11:09:47 crc kubenswrapper[4724]: E0226 11:09:47.407929 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 11:09:47 crc kubenswrapper[4724]: E0226 11:09:47.408115 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sht7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-p9shd_openshift-marketplace(0eb55921-4244-4557-aa72-97cea802c3fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\": context canceled" logger="UnhandledError" Feb 26 11:09:47 crc kubenswrapper[4724]: E0226 11:09:47.409331 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get 
\\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\\\": context canceled\"" pod="openshift-marketplace/community-operators-p9shd" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" Feb 26 11:09:55 crc kubenswrapper[4724]: I0226 11:09:55.346293 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:09:55 crc kubenswrapper[4724]: I0226 11:09:55.346632 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:09:56 crc kubenswrapper[4724]: I0226 11:09:56.624702 4724 ???:1] "http: TLS handshake error from 192.168.126.11:54824: no serving certificate available for the kubelet" Feb 26 11:09:57 crc kubenswrapper[4724]: I0226 11:09:57.079823 4724 scope.go:117] "RemoveContainer" containerID="199721e1b203772f03088425a91c9a390a6a80498fb8a2c23f614043eecc34f7" Feb 26 11:09:57 crc kubenswrapper[4724]: I0226 11:09:57.218616 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"99d4a0b0-dbd2-44f9-afb9-087ea5165db7","Type":"ContainerStarted","Data":"e2fa1a57cbca97fde024c43e899fd8b502fd1b02dc7bba7e87dc7aee100ffe22"} Feb 26 11:09:57 crc kubenswrapper[4724]: I0226 11:09:57.435915 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 11:09:57 crc kubenswrapper[4724]: I0226 11:09:57.676207 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vxxfb"] Feb 26 11:09:58 crc kubenswrapper[4724]: E0226 11:09:58.859661 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:cf6d845794adf5448325bc506389d32e0330b3e9db6bf5f46ec1e824f4c04363: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:cf6d845794adf5448325bc506389d32e0330b3e9db6bf5f46ec1e824f4c04363\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 11:09:58 crc kubenswrapper[4724]: E0226 11:09:58.860011 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ht4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hj7c4_openshift-marketplace(f4930fbf-4372-4466-b084-a13dfa8a5415): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:cf6d845794adf5448325bc506389d32e0330b3e9db6bf5f46ec1e824f4c04363: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:cf6d845794adf5448325bc506389d32e0330b3e9db6bf5f46ec1e824f4c04363\": context canceled" logger="UnhandledError" Feb 26 11:09:58 crc kubenswrapper[4724]: E0226 11:09:58.861246 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:cf6d845794adf5448325bc506389d32e0330b3e9db6bf5f46ec1e824f4c04363: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:cf6d845794adf5448325bc506389d32e0330b3e9db6bf5f46ec1e824f4c04363\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-hj7c4" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" Feb 26 11:09:59 crc kubenswrapper[4724]: E0226 11:09:59.094254 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 11:09:59 crc kubenswrapper[4724]: E0226 11:09:59.094576 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnd8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vlps5_openshift-marketplace(f9ed0863-9bdf-48ba-ad70-c1c728c58730): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\": context canceled" logger="UnhandledError" Feb 26 11:09:59 crc kubenswrapper[4724]: E0226 11:09:59.095689 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\\\": context canceled\"" pod="openshift-marketplace/community-operators-vlps5" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.135392 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535070-lxjqb"] Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.136249 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.139900 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.143293 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535070-lxjqb"] Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.203924 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk25d\" (UniqueName: \"kubernetes.io/projected/7940e7c1-723b-42e3-818f-dfbd7a795e71-kube-api-access-hk25d\") pod \"auto-csr-approver-29535070-lxjqb\" (UID: \"7940e7c1-723b-42e3-818f-dfbd7a795e71\") " pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.305295 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk25d\" (UniqueName: \"kubernetes.io/projected/7940e7c1-723b-42e3-818f-dfbd7a795e71-kube-api-access-hk25d\") pod \"auto-csr-approver-29535070-lxjqb\" (UID: \"7940e7c1-723b-42e3-818f-dfbd7a795e71\") " pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.324743 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk25d\" (UniqueName: \"kubernetes.io/projected/7940e7c1-723b-42e3-818f-dfbd7a795e71-kube-api-access-hk25d\") pod \"auto-csr-approver-29535070-lxjqb\" (UID: \"7940e7c1-723b-42e3-818f-dfbd7a795e71\") " pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:10:00 crc kubenswrapper[4724]: I0226 11:10:00.479005 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:10:03 crc kubenswrapper[4724]: E0226 11:10:03.465875 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vlps5" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" Feb 26 11:10:03 crc kubenswrapper[4724]: E0226 11:10:03.466032 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hj7c4" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" Feb 26 11:10:03 crc kubenswrapper[4724]: W0226 11:10:03.492702 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5232016e153f4fbfdad962f00a6a96ada92ea59145925e0f56d8c937f84ac335 WatchSource:0}: Error finding container 5232016e153f4fbfdad962f00a6a96ada92ea59145925e0f56d8c937f84ac335: Status 404 returned error can't find the container with id 5232016e153f4fbfdad962f00a6a96ada92ea59145925e0f56d8c937f84ac335 Feb 26 11:10:03 crc kubenswrapper[4724]: E0226 11:10:03.548524 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 11:10:03 crc kubenswrapper[4724]: E0226 11:10:03.548716 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ht5jv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2gkcb_openshift-marketplace(35a09ba5-1063-467d-b7a6-c1b2c37a135e): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:10:03 crc kubenswrapper[4724]: E0226 11:10:03.550090 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2gkcb" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" Feb 26 11:10:04 crc kubenswrapper[4724]: I0226 11:10:04.253314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" event={"ID":"c4f276b5-977b-4a34-9c9c-2b699d10345c","Type":"ContainerStarted","Data":"b1c5b6937deee5464fc1c9ff64df0a816291780ef22bcbea9fc436040b4fa385"} Feb 26 11:10:04 crc kubenswrapper[4724]: I0226 11:10:04.254091 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b4fc3a633736fe1fdb3586de450e231abd26f91c44a4ab7ae10f9481179df26d"} Feb 26 11:10:04 crc kubenswrapper[4724]: I0226 11:10:04.254946 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b9725956721a7bf9f44b82c44a2a7653e1e2fa4c8c2d12df7ff634c0f07784e7"} Feb 26 11:10:04 crc kubenswrapper[4724]: I0226 11:10:04.255782 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"224453ca-c149-4f84-b22f-d50a9994043e","Type":"ContainerStarted","Data":"7360ec9aac00e124027e2b6c8ff682614a9f14112a19246a36e045d857ffe1a2"} Feb 26 11:10:04 crc kubenswrapper[4724]: I0226 11:10:04.256520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5232016e153f4fbfdad962f00a6a96ada92ea59145925e0f56d8c937f84ac335"} Feb 26 11:10:05 crc kubenswrapper[4724]: I0226 11:10:05.346841 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:05 crc kubenswrapper[4724]: I0226 11:10:05.346926 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:10 crc kubenswrapper[4724]: E0226 11:10:10.309795 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 11:10:10 crc kubenswrapper[4724]: E0226 11:10:10.310411 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq4gb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-92dsj_openshift-marketplace(8a06e0f8-4c39-4fbd-a7fc-710337cbfafc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:10:10 crc kubenswrapper[4724]: E0226 11:10:10.313083 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-92dsj" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" Feb 26 11:10:13 crc kubenswrapper[4724]: E0226 11:10:13.190188 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 11:10:13 crc kubenswrapper[4724]: E0226 11:10:13.190675 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmbgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xb5gc_openshift-marketplace(056030ad-19ca-4542-a486-139eb62524b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:10:13 crc kubenswrapper[4724]: E0226 11:10:13.191865 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xb5gc" podUID="056030ad-19ca-4542-a486-139eb62524b0" Feb 26 11:10:16 crc kubenswrapper[4724]: I0226 11:10:15.346767 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:16 crc kubenswrapper[4724]: I0226 11:10:15.346826 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:17 crc kubenswrapper[4724]: I0226 11:10:16.906401 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:10:17 crc kubenswrapper[4724]: I0226 11:10:16.906452 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.594665 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xb5gc" podUID="056030ad-19ca-4542-a486-139eb62524b0" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.597173 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-92dsj" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.678768 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.678978 4724 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 11:10:19 crc kubenswrapper[4724]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 11:10:19 crc kubenswrapper[4724]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s26nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535068-crjcm_openshift-infra(91b7ba35-3bf3-4738-8a71-d093b0e7fd12): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 11:10:19 crc kubenswrapper[4724]: > logger="UnhandledError" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.680090 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29535068-crjcm" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.710471 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.710751 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vvcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mqtct_openshift-marketplace(48a2c1ec-376b-440a-9dd2-6037d5dfdd1f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:10:19 crc kubenswrapper[4724]: E0226 11:10:19.712533 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mqtct" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" Feb 26 11:10:19 crc kubenswrapper[4724]: I0226 11:10:19.843236 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535070-lxjqb"] Feb 26 11:10:20 crc kubenswrapper[4724]: I0226 11:10:20.103808 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b89969d5-gwmk8"] Feb 26 11:10:20 crc kubenswrapper[4724]: W0226 11:10:20.111839 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9371739_6d1a_4872_b11e_b2e915349056.slice/crio-830dc4cbc8c4c294f74154cf912d090370b9a3fa45a60377747c9d6b79b4fce7 WatchSource:0}: Error finding container 830dc4cbc8c4c294f74154cf912d090370b9a3fa45a60377747c9d6b79b4fce7: Status 404 returned error can't find the container with id 830dc4cbc8c4c294f74154cf912d090370b9a3fa45a60377747c9d6b79b4fce7 Feb 26 11:10:20 crc kubenswrapper[4724]: I0226 11:10:20.197264 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"] Feb 26 11:10:20 crc kubenswrapper[4724]: I0226 11:10:20.341905 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" event={"ID":"c9371739-6d1a-4872-b11e-b2e915349056","Type":"ContainerStarted","Data":"830dc4cbc8c4c294f74154cf912d090370b9a3fa45a60377747c9d6b79b4fce7"} Feb 26 11:10:20 crc kubenswrapper[4724]: I0226 11:10:20.343781 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" event={"ID":"f0e798af-3465-4040-a183-3319e609a282","Type":"ContainerStarted","Data":"c7bd9e93a0dd5305cd60c7277bdc0c129aa5c41eb71a6877daf6e22ba0daf4e8"} Feb 26 11:10:20 crc kubenswrapper[4724]: I0226 11:10:20.346636 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" event={"ID":"7940e7c1-723b-42e3-818f-dfbd7a795e71","Type":"ContainerStarted","Data":"6b472680e8475c9fdf19ab0621a756d7c25f022ed11de4b5020bad0ee57a10f8"} Feb 26 11:10:20 crc kubenswrapper[4724]: E0226 11:10:20.388559 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mqtct" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.355219 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"99d4a0b0-dbd2-44f9-afb9-087ea5165db7","Type":"ContainerStarted","Data":"1760f6703aa4a52014c45a95d133ffbff5f9d8a169e42d7e7dede3b3fc3b1781"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.358720 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0f87ce826254699842586159e23e15ee44149834dfceeffb6c4e63e0c3cc60eb"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.367390 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" event={"ID":"c9371739-6d1a-4872-b11e-b2e915349056","Type":"ContainerStarted","Data":"5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.370358 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" event={"ID":"f0e798af-3465-4040-a183-3319e609a282","Type":"ContainerStarted","Data":"b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.371927 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" event={"ID":"c4f276b5-977b-4a34-9c9c-2b699d10345c","Type":"ContainerStarted","Data":"f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.377368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"af34f37fa0696179c95cb4df467683a4ac81fadf2958e3b52690226ad390f057"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.381587 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k5ktg" event={"ID":"7027d958-98c3-4fd1-9442-232be60e1eb7","Type":"ContainerStarted","Data":"c6b33d90fde6af6eda5d3779d45e2539e07cb779fb37997e2a90a3f47dbadc92"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.383462 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"19b842db083c5a80e4965f1df0d888a59cb341cc80e95b2ab140c60cdae91a0d"} Feb 26 11:10:21 crc kubenswrapper[4724]: I0226 11:10:21.385011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"224453ca-c149-4f84-b22f-d50a9994043e","Type":"ContainerStarted","Data":"5cdc52ad0b44b6b25225029a9c680d1331be2ae27b2df5f0ee438395a6827f36"} Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.394059 4724 generic.go:334] "Generic (PLEG): container finished" podID="224453ca-c149-4f84-b22f-d50a9994043e" containerID="5cdc52ad0b44b6b25225029a9c680d1331be2ae27b2df5f0ee438395a6827f36" exitCode=0 Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.394193 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"224453ca-c149-4f84-b22f-d50a9994043e","Type":"ContainerDied","Data":"5cdc52ad0b44b6b25225029a9c680d1331be2ae27b2df5f0ee438395a6827f36"} Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.396273 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.396329 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.396550 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.397094 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:10:22 crc kubenswrapper[4724]: E0226 11:10:22.427709 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 11:10:22 crc kubenswrapper[4724]: E0226 11:10:22.427887 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7k4d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-64lrq_openshift-marketplace(4f727f37-5bac-476b-88a0-3d751c47e264): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 11:10:22 crc kubenswrapper[4724]: E0226 11:10:22.429016 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-64lrq" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.499872 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=64.499835273 podStartE2EDuration="1m4.499835273s" podCreationTimestamp="2026-02-26 11:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:10:22.492772221 +0000 UTC m=+289.148511356" watchObservedRunningTime="2026-02-26 11:10:22.499835273 +0000 UTC m=+289.155574388" Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.568725 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" podStartSLOduration=71.568700253 podStartE2EDuration="1m11.568700253s" podCreationTimestamp="2026-02-26 11:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:10:22.564328068 +0000 UTC m=+289.220067203" watchObservedRunningTime="2026-02-26 11:10:22.568700253 +0000 UTC m=+289.224439368" Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.569997 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" podStartSLOduration=71.56998824 podStartE2EDuration="1m11.56998824s" podCreationTimestamp="2026-02-26 11:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:10:22.539701844 +0000 UTC m=+289.195440979" watchObservedRunningTime="2026-02-26 11:10:22.56998824 +0000 UTC m=+289.225727375" Feb 26 11:10:22 crc kubenswrapper[4724]: I0226 11:10:22.669029 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" podStartSLOduration=226.669003742 podStartE2EDuration="3m46.669003742s" podCreationTimestamp="2026-02-26 11:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:10:22.665027369 +0000 UTC m=+289.320766494" watchObservedRunningTime="2026-02-26 11:10:22.669003742 +0000 UTC m=+289.324742857" Feb 26 11:10:24 crc kubenswrapper[4724]: I0226 11:10:24.358314 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" Feb 26 11:10:24 crc kubenswrapper[4724]: I0226 11:10:24.372961 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.345953 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.346479 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.346637 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.346896 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.346928 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.346992 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.347018 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:25 crc kubenswrapper[4724]: E0226 
11:10:25.892911 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-64lrq" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" Feb 26 11:10:25 crc kubenswrapper[4724]: I0226 11:10:25.937707 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.076357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/224453ca-c149-4f84-b22f-d50a9994043e-kube-api-access\") pod \"224453ca-c149-4f84-b22f-d50a9994043e\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.076439 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/224453ca-c149-4f84-b22f-d50a9994043e-kubelet-dir\") pod \"224453ca-c149-4f84-b22f-d50a9994043e\" (UID: \"224453ca-c149-4f84-b22f-d50a9994043e\") " Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.076727 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/224453ca-c149-4f84-b22f-d50a9994043e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "224453ca-c149-4f84-b22f-d50a9994043e" (UID: "224453ca-c149-4f84-b22f-d50a9994043e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.083584 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224453ca-c149-4f84-b22f-d50a9994043e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "224453ca-c149-4f84-b22f-d50a9994043e" (UID: "224453ca-c149-4f84-b22f-d50a9994043e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.177410 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/224453ca-c149-4f84-b22f-d50a9994043e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.177441 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/224453ca-c149-4f84-b22f-d50a9994043e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.433253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"224453ca-c149-4f84-b22f-d50a9994043e","Type":"ContainerDied","Data":"7360ec9aac00e124027e2b6c8ff682614a9f14112a19246a36e045d857ffe1a2"} Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.433296 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7360ec9aac00e124027e2b6c8ff682614a9f14112a19246a36e045d857ffe1a2" Feb 26 11:10:26 crc kubenswrapper[4724]: I0226 11:10:26.433323 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 11:10:27 crc kubenswrapper[4724]: I0226 11:10:27.058369 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" Feb 26 11:10:27 crc kubenswrapper[4724]: I0226 11:10:27.074784 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" Feb 26 11:10:35 crc kubenswrapper[4724]: I0226 11:10:35.346259 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:35 crc kubenswrapper[4724]: I0226 11:10:35.346312 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:35 crc kubenswrapper[4724]: I0226 11:10:35.346813 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:35 crc kubenswrapper[4724]: I0226 11:10:35.346846 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:42 crc kubenswrapper[4724]: I0226 11:10:42.389254 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.346081 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.347287 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.347356 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.346164 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.347722 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.347965 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.348002 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.348013 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c6b33d90fde6af6eda5d3779d45e2539e07cb779fb37997e2a90a3f47dbadc92"} pod="openshift-console/downloads-7954f5f757-k5ktg" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 26 11:10:45 crc kubenswrapper[4724]: I0226 11:10:45.348054 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" containerID="cri-o://c6b33d90fde6af6eda5d3779d45e2539e07cb779fb37997e2a90a3f47dbadc92" gracePeriod=2 Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.536268 4724 generic.go:334] "Generic (PLEG): container finished" podID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerID="c6b33d90fde6af6eda5d3779d45e2539e07cb779fb37997e2a90a3f47dbadc92" exitCode=0 Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.536305 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k5ktg" event={"ID":"7027d958-98c3-4fd1-9442-232be60e1eb7","Type":"ContainerDied","Data":"c6b33d90fde6af6eda5d3779d45e2539e07cb779fb37997e2a90a3f47dbadc92"} Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.536358 4724 scope.go:117] "RemoveContainer" containerID="ee6d3de71827e2c28c30694e0167b2d98f1b93820b63b7f563d157c0b08b21b9" Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.906118 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.906205 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.906259 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.906824 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:10:46 crc kubenswrapper[4724]: I0226 11:10:46.906900 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5" gracePeriod=600 Feb 26 11:10:47 crc kubenswrapper[4724]: I0226 11:10:47.543764 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5" exitCode=0 Feb 26 11:10:47 crc kubenswrapper[4724]: I0226 11:10:47.543799 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5"} Feb 26 11:10:53 crc kubenswrapper[4724]: I0226 11:10:53.209016 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 11:10:55 crc kubenswrapper[4724]: I0226 11:10:55.346071 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:10:55 crc kubenswrapper[4724]: I0226 11:10:55.346172 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.570104 4724 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.571227 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224453ca-c149-4f84-b22f-d50a9994043e" containerName="pruner" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.571307 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="224453ca-c149-4f84-b22f-d50a9994043e" containerName="pruner" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.571524 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="224453ca-c149-4f84-b22f-d50a9994043e" containerName="pruner" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.571978 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.572975 4724 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.573289 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec" gracePeriod=15 Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.573356 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342" gracePeriod=15 Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.573355 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c" gracePeriod=15 Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.573445 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f" gracePeriod=15 Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.573490 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8" gracePeriod=15 Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.575831 4724 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.576095 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.577196 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.577369 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.577483 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.577599 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.577711 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.577822 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.577931 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.578060 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.578167 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.578327 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.578432 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.578540 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.578642 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.578765 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.578865 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579151 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579283 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579394 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579508 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579620 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579710 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.579798 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.580027 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.580126 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: E0226 11:10:58.580244 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.580354 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.580791 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.580907 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.652664 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688150 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688329 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688364 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688439 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688465 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688558 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.688588 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.789822 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.789915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.789960 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.789988 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.789980 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790032 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790059 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790066 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790126 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790120 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790166 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790147 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790214 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790224 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.790242 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:58 crc kubenswrapper[4724]: I0226 11:10:58.950483 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:10:59 crc kubenswrapper[4724]: I0226 11:10:59.004554 4724 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 26 11:10:59 crc kubenswrapper[4724]: I0226 11:10:59.004633 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 26 11:11:00 crc kubenswrapper[4724]: I0226 11:11:00.627475 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:11:00 crc kubenswrapper[4724]: I0226 11:11:00.629794 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 11:11:00 crc kubenswrapper[4724]: I0226 11:11:00.630741 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342" exitCode=2 Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.638894 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.640368 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.641947 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8" exitCode=0 Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.641971 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f" exitCode=0 Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.641980 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c" exitCode=0 Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.645006 4724 generic.go:334] "Generic (PLEG): container finished" podID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" containerID="1760f6703aa4a52014c45a95d133ffbff5f9d8a169e42d7e7dede3b3fc3b1781" exitCode=0 Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.645041 4724 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"99d4a0b0-dbd2-44f9-afb9-087ea5165db7","Type":"ContainerDied","Data":"1760f6703aa4a52014c45a95d133ffbff5f9d8a169e42d7e7dede3b3fc3b1781"} Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.645834 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:01 crc kubenswrapper[4724]: I0226 11:11:01.646098 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:03 crc kubenswrapper[4724]: I0226 11:11:03.661915 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:11:03 crc kubenswrapper[4724]: I0226 11:11:03.665053 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 11:11:03 crc kubenswrapper[4724]: I0226 11:11:03.666342 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec" exitCode=0 Feb 26 11:11:03 crc kubenswrapper[4724]: I0226 11:11:03.978946 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:03 crc kubenswrapper[4724]: I0226 11:11:03.980226 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: E0226 11:11:04.293253 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-mqtct.1897c7501f33bf7e\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-mqtct.1897c7501f33bf7e openshift-marketplace 28461 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-mqtct,UID:48a2c1ec-376b-440a-9dd2-6037d5dfdd1f,APIVersion:v1,ResourceVersion:28317,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:08:45 +0000 UTC,LastTimestamp:2026-02-26 11:11:04.292422828 +0000 UTC 
m=+330.948161943,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.403090 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.403911 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.404399 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.573605 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kube-api-access\") pod \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.573956 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kubelet-dir\") pod \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.574003 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-var-lock\") pod \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\" (UID: \"99d4a0b0-dbd2-44f9-afb9-087ea5165db7\") " Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.574080 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "99d4a0b0-dbd2-44f9-afb9-087ea5165db7" (UID: "99d4a0b0-dbd2-44f9-afb9-087ea5165db7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.574200 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-var-lock" (OuterVolumeSpecName: "var-lock") pod "99d4a0b0-dbd2-44f9-afb9-087ea5165db7" (UID: "99d4a0b0-dbd2-44f9-afb9-087ea5165db7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.574424 4724 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.574452 4724 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.580344 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "99d4a0b0-dbd2-44f9-afb9-087ea5165db7" (UID: "99d4a0b0-dbd2-44f9-afb9-087ea5165db7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.674135 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"99d4a0b0-dbd2-44f9-afb9-087ea5165db7","Type":"ContainerDied","Data":"e2fa1a57cbca97fde024c43e899fd8b502fd1b02dc7bba7e87dc7aee100ffe22"} Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.675798 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2fa1a57cbca97fde024c43e899fd8b502fd1b02dc7bba7e87dc7aee100ffe22" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.675836 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99d4a0b0-dbd2-44f9-afb9-087ea5165db7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.674202 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.697826 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.698769 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.914172 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.915625 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.916465 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.917031 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.917418 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:04 crc kubenswrapper[4724]: I0226 11:11:04.918040 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.080131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.081262 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.081313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.081696 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.081596 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.081521 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.082658 4724 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.082677 4724 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.184380 4724 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.315112 4724 scope.go:117] "RemoveContainer" containerID="a43b9054457490b7111a0fd260c4933ee852a58bddebfa6de0a5578b9718fe75" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.346340 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.346400 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: W0226 11:11:05.480803 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b33b4cbd3e7d976dc325cf6881e373d9d55988291251c31895f99ac7ed35eac5 WatchSource:0}: Error finding container b33b4cbd3e7d976dc325cf6881e373d9d55988291251c31895f99ac7ed35eac5: Status 404 returned error can't find the container with id b33b4cbd3e7d976dc325cf6881e373d9d55988291251c31895f99ac7ed35eac5 Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.609898 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.610580 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.611060 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.611316 4724 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.611514 4724 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.611546 4724 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.611727 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.683315 4724 scope.go:117] "RemoveContainer" containerID="3da75d49e36931537c05d9fcf7d740d231c59362c50cb52eb8726e923f39a3e8" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.683475 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.684750 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.685119 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.685558 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.699245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b33b4cbd3e7d976dc325cf6881e373d9d55988291251c31895f99ac7ed35eac5"} Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.731383 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.731855 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.732086 4724 status_manager.go:851] "Failed to get status for pod" 
podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.772104 4724 scope.go:117] "RemoveContainer" containerID="bf8723866cde1469761fe6479d790554d0d53e71a069e9298e01d2308ba2f18f" Feb 26 11:11:05 crc kubenswrapper[4724]: E0226 11:11:05.812481 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.865495 4724 scope.go:117] "RemoveContainer" containerID="ad37c199711acb5e5b146710676dfd554d2aedf3c2c9eab630d37bc1c49efe3c" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.984235 4724 scope.go:117] "RemoveContainer" containerID="ac92c268a3263732e65883fb0c0ba9287fd368740b396b7f043baf8799dc9342" Feb 26 11:11:05 crc kubenswrapper[4724]: I0226 11:11:05.989246 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.016697 4724 scope.go:117] "RemoveContainer" containerID="92535be6b2e4183113b12e04c6f0b8ea7f5684807545b27079b45dcfb2510bec" Feb 26 11:11:06 crc kubenswrapper[4724]: E0226 11:11:06.213671 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.712164 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"512c865cae468760a5a7701ee00c685edb3eb8ce270a9fed6d0b0e6c4c9fab74"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.713715 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.714954 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.715370 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.715699 4724 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerStarted","Data":"6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.717112 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.717416 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.717652 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.717963 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.725039 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerStarted","Data":"ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.726207 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.727410 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.727469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4d06de794208fb38d65f1efc91a7a47e65f41860e048a0d5b7b11c7983277ea2"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.727972 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" 
pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.728721 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.729358 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.729578 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerStarted","Data":"bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.730537 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.730977 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.731420 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.733722 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.733975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerStarted","Data":"10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.734423 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.735303 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.735708 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.736165 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.736485 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k5ktg" event={"ID":"7027d958-98c3-4fd1-9442-232be60e1eb7","Type":"ContainerStarted","Data":"347e7d1182f6186e4eb7004d857aaab0aeae0d3b53cd175234e1d3ee2590be7f"} Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.736928 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.740348 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.740363 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.740738 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.740753 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.740937 4724 status_manager.go:851] "Failed to get status for pod" 
podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.741140 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.741403 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.741716 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.741947 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.742265 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.742544 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.742761 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.742972 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.743211 4724 
status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:06 crc kubenswrapper[4724]: I0226 11:11:06.743449 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.015206 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.192779 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[],\\\"sizeBytes\\\":1215623375},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\
\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.193314 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.193518 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.193666 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.193807 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.193825 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:11:07 crc kubenswrapper[4724]: E0226 11:11:07.287631 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-mqtct.1897c7501f33bf7e\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-mqtct.1897c7501f33bf7e openshift-marketplace 28461 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-mqtct,UID:48a2c1ec-376b-440a-9dd2-6037d5dfdd1f,APIVersion:v1,ResourceVersion:28317,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:08:45 +0000 UTC,LastTimestamp:2026-02-26 11:11:04.292422828 +0000 UTC m=+330.948161943,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.745218 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" event={"ID":"7940e7c1-723b-42e3-818f-dfbd7a795e71","Type":"ContainerStarted","Data":"5b76f1a8012fe6e0eeb4815d4aefb6b5593c7df2aacd6955d88d6b9bc93d2046"} Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.747666 4724 generic.go:334] "Generic (PLEG): 
container finished" podID="0eb55921-4244-4557-aa72-97cea802c3fb" containerID="ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21" exitCode=0 Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.747735 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerDied","Data":"ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21"} Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.748554 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.749775 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.750431 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.750741 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerID="bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8" exitCode=0 Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.750947 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerDied","Data":"bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8"} Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.751591 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.751936 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.752448 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.752755 4724 
status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.752994 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.755716 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.756973 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.757310 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.757402 4724 generic.go:334] "Generic (PLEG): container finished" podID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerID="10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40" exitCode=0 Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.757493 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerDied","Data":"10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40"} Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.757537 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.757723 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.757926 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.758463 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.758803 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.759691 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.760037 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.760303 4724 generic.go:334] "Generic (PLEG): container finished" podID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerID="6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382" exitCode=0 Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.760391 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerDied","Data":"6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382"} Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.760485 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.760760 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.760969 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc 
kubenswrapper[4724]: I0226 11:11:07.761166 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.761470 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.761512 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.761557 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.761664 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.761949 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.762206 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.762429 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.762620 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 
11:11:07.762794 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.762991 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.763170 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:07 crc kubenswrapper[4724]: I0226 11:11:07.763454 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: E0226 11:11:08.616105 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.766348 4724 generic.go:334] "Generic (PLEG): container finished" podID="7940e7c1-723b-42e3-818f-dfbd7a795e71" containerID="5b76f1a8012fe6e0eeb4815d4aefb6b5593c7df2aacd6955d88d6b9bc93d2046" exitCode=0 Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.766406 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" event={"ID":"7940e7c1-723b-42e3-818f-dfbd7a795e71","Type":"ContainerDied","Data":"5b76f1a8012fe6e0eeb4815d4aefb6b5593c7df2aacd6955d88d6b9bc93d2046"} Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.767184 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.767771 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768115 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768321 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768483 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768628 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768770 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768920 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.768920 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535068-crjcm" event={"ID":"91b7ba35-3bf3-4738-8a71-d093b0e7fd12","Type":"ContainerStarted","Data":"b3cfc0eb4e47a693d43dd4113a84c88d408e112e4e68c57057e6512a8879bc5e"} Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.769268 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.769779 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.770004 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.770155 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.770418 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.770647 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.770837 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.771034 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.771317 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:08 crc kubenswrapper[4724]: I0226 11:11:08.772328 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.780109 4724 generic.go:334] "Generic (PLEG): container finished" podID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" containerID="b3cfc0eb4e47a693d43dd4113a84c88d408e112e4e68c57057e6512a8879bc5e" exitCode=0 Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.781115 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535068-crjcm" 
event={"ID":"91b7ba35-3bf3-4738-8a71-d093b0e7fd12","Type":"ContainerDied","Data":"b3cfc0eb4e47a693d43dd4113a84c88d408e112e4e68c57057e6512a8879bc5e"} Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.781239 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.781429 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.781619 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.782102 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.782899 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.783122 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.783322 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.783487 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.783650 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" 
pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:09 crc kubenswrapper[4724]: I0226 11:11:09.783851 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.588773 4724 scope.go:117] "RemoveContainer" containerID="3f21062d3c30a5dcc62b35ef2bb008492d72d18bb91f46cfa94d4cddd5a4bfd2" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.662673 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.663277 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.663768 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.663979 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.664167 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.664375 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.664655 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.665021 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.665430 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.665789 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.666363 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.667614 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535068-crjcm" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.668194 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.668624 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.668921 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.669209 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.669557 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.669801 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.670049 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.670584 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.670939 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.671189 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.776791 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s26nd\" (UniqueName: \"kubernetes.io/projected/91b7ba35-3bf3-4738-8a71-d093b0e7fd12-kube-api-access-s26nd\") pod \"91b7ba35-3bf3-4738-8a71-d093b0e7fd12\" (UID: \"91b7ba35-3bf3-4738-8a71-d093b0e7fd12\") " Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.776834 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk25d\" (UniqueName: \"kubernetes.io/projected/7940e7c1-723b-42e3-818f-dfbd7a795e71-kube-api-access-hk25d\") pod \"7940e7c1-723b-42e3-818f-dfbd7a795e71\" (UID: \"7940e7c1-723b-42e3-818f-dfbd7a795e71\") " Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.781814 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7940e7c1-723b-42e3-818f-dfbd7a795e71-kube-api-access-hk25d" (OuterVolumeSpecName: "kube-api-access-hk25d") pod "7940e7c1-723b-42e3-818f-dfbd7a795e71" (UID: "7940e7c1-723b-42e3-818f-dfbd7a795e71"). InnerVolumeSpecName "kube-api-access-hk25d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.783066 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b7ba35-3bf3-4738-8a71-d093b0e7fd12-kube-api-access-s26nd" (OuterVolumeSpecName: "kube-api-access-s26nd") pod "91b7ba35-3bf3-4738-8a71-d093b0e7fd12" (UID: "91b7ba35-3bf3-4738-8a71-d093b0e7fd12"). InnerVolumeSpecName "kube-api-access-s26nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.790811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535068-crjcm" event={"ID":"91b7ba35-3bf3-4738-8a71-d093b0e7fd12","Type":"ContainerDied","Data":"5b0aa192d079c4e570ac51cd56221f4e74ef0374e350098f7ea3001c3f8001ad"} Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.790868 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b0aa192d079c4e570ac51cd56221f4e74ef0374e350098f7ea3001c3f8001ad" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.790822 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535068-crjcm" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.793343 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" event={"ID":"7940e7c1-723b-42e3-818f-dfbd7a795e71","Type":"ContainerDied","Data":"6b472680e8475c9fdf19ab0621a756d7c25f022ed11de4b5020bad0ee57a10f8"} Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.793393 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b472680e8475c9fdf19ab0621a756d7c25f022ed11de4b5020bad0ee57a10f8" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.793455 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" Feb 26 11:11:11 crc kubenswrapper[4724]: E0226 11:11:11.817761 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="6.4s" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.818037 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.818361 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.818612 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.818768 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.818927 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.819105 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.819328 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.819539 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 
38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.819742 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.819895 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.820122 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.820284 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.820457 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.820609 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.820751 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.820891 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.821087 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.821249 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.821436 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.821596 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.878435 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s26nd\" (UniqueName: \"kubernetes.io/projected/91b7ba35-3bf3-4738-8a71-d093b0e7fd12-kube-api-access-s26nd\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.878467 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk25d\" (UniqueName: \"kubernetes.io/projected/7940e7c1-723b-42e3-818f-dfbd7a795e71-kube-api-access-hk25d\") on node \"crc\" DevicePath \"\"" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.975246 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.977079 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.977613 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.977978 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.978252 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.978724 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.979493 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.979791 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.979991 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.982901 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.983381 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.993601 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.993649 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da" Feb 26 11:11:11 crc kubenswrapper[4724]: E0226 11:11:11.994431 4724 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:11 crc kubenswrapper[4724]: I0226 11:11:11.995119 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.010019 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.010374 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.985169 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.985428 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.985609 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.985825 4724 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.986000 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.986244 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.986431 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.986648 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.987100 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.987487 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:13 crc kubenswrapper[4724]: I0226 11:11:13.987716 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.815962 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.817497 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.817571 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb" exitCode=1 Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.817621 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb"} Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.818490 4724 scope.go:117] "RemoveContainer" containerID="30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.818530 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.818937 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.819345 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.819941 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.820291 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.820690 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.820961 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.821269 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.821555 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.821872 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.822152 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:14 crc kubenswrapper[4724]: I0226 11:11:14.822436 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:15 crc kubenswrapper[4724]: I0226 11:11:15.346367 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:15 crc kubenswrapper[4724]: I0226 11:11:15.346424 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:11:15 crc kubenswrapper[4724]: I0226 11:11:15.346379 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:15 crc kubenswrapper[4724]: I0226 11:11:15.346472 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": 
Feb 26 11:11:15 crc kubenswrapper[4724]: I0226 11:11:15.346472 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.288723 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-mqtct.1897c7501f33bf7e\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-mqtct.1897c7501f33bf7e openshift-marketplace 28461 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-mqtct,UID:48a2c1ec-376b-440a-9dd2-6037d5dfdd1f,APIVersion:v1,ResourceVersion:28317,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:08:45 +0000 UTC,LastTimestamp:2026-02-26 11:11:04.292422828 +0000 UTC m=+330.948161943,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.303704 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[],\\\"sizeBytes\\\":1215623375},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.304053 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.304431 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.304760 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.305770 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:17 crc kubenswrapper[4724]: E0226 11:11:17.305866 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 26 11:11:18 crc kubenswrapper[4724]: E0226 11:11:18.220061 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="7s"
Feb 26 11:11:19 crc kubenswrapper[4724]: W0226 11:11:19.782139 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-74b3e92962a1604427ac4ba36b3368a629470ebc8c29773dbb917fd61f3e909d WatchSource:0}: Error finding container 74b3e92962a1604427ac4ba36b3368a629470ebc8c29773dbb917fd61f3e909d: Status 404 returned error can't find the container with id 74b3e92962a1604427ac4ba36b3368a629470ebc8c29773dbb917fd61f3e909d
Feb 26 11:11:19 crc kubenswrapper[4724]: I0226 11:11:19.845132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"74b3e92962a1604427ac4ba36b3368a629470ebc8c29773dbb917fd61f3e909d"}
Feb 26 11:11:20 crc kubenswrapper[4724]: I0226 11:11:20.854921 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.009571 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.980046 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.982316 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.983097 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.983790 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.985870 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.986333 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.987004 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.987580 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: 
connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.987872 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.988311 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.988886 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:23 crc kubenswrapper[4724]: I0226 11:11:23.989203 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.875055 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.876585 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.876712 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5982ccc4e0be4127f06743b6c974c80d48e068250ba4adadde6014566cd152b2"} Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.877613 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.877885 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.878120 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.878365 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.878603 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.878837 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.879062 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.879299 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.879512 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.879727 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.879940 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:24 crc kubenswrapper[4724]: I0226 11:11:24.880155 4724 status_manager.go:851] "Failed to get status for pod" 
podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:25 crc kubenswrapper[4724]: E0226 11:11:25.221340 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="7s" Feb 26 11:11:25 crc kubenswrapper[4724]: I0226 11:11:25.346412 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:25 crc kubenswrapper[4724]: I0226 11:11:25.346426 4724 patch_prober.go:28] interesting pod/downloads-7954f5f757-k5ktg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 26 11:11:25 crc kubenswrapper[4724]: I0226 11:11:25.346486 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:11:25 crc kubenswrapper[4724]: I0226 11:11:25.346503 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k5ktg" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.291166 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-mqtct.1897c7501f33bf7e\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-mqtct.1897c7501f33bf7e openshift-marketplace 28461 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-mqtct,UID:48a2c1ec-376b-440a-9dd2-6037d5dfdd1f,APIVersion:v1,ResourceVersion:28317,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:08:45 +0000 UTC,LastTimestamp:2026-02-26 11:11:04.292422828 +0000 UTC m=+330.948161943,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.583955 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[],\\\"sizeBytes\\\":1215623375},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490
370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.584791 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.585029 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" 
Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.585301 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.585685 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:27 crc kubenswrapper[4724]: E0226 11:11:27.585715 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 11:11:30 crc kubenswrapper[4724]: I0226 11:11:30.855434 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:30 crc kubenswrapper[4724]: I0226 11:11:30.855783 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 11:11:30 crc kubenswrapper[4724]: I0226 11:11:30.855874 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 11:11:32 crc kubenswrapper[4724]: E0226 11:11:32.221744 4724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="7s" Feb 26 11:11:32 crc kubenswrapper[4724]: I0226 11:11:32.442368 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 11:11:32 crc kubenswrapper[4724]: I0226 11:11:32.941236 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"55b300cfc31bf5c9398fbf368803a19bfdb358f6e44ff074bb448fffed901982"} Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.977796 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.978617 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.978882 4724 status_manager.go:851] "Failed to get status for pod" 
podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.979151 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.979435 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.979760 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.980219 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.980485 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.980780 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.981107 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.981412 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:33 crc kubenswrapper[4724]: I0226 11:11:33.981748 4724 
status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:34 crc kubenswrapper[4724]: I0226 11:11:34.952933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerStarted","Data":"d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb"} Feb 26 11:11:34 crc kubenswrapper[4724]: I0226 11:11:34.955616 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerStarted","Data":"de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40"} Feb 26 11:11:34 crc kubenswrapper[4724]: I0226 11:11:34.957872 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerStarted","Data":"bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d"} Feb 26 11:11:34 crc kubenswrapper[4724]: I0226 11:11:34.960565 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerStarted","Data":"079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732"} Feb 26 11:11:34 crc kubenswrapper[4724]: I0226 11:11:34.962546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerStarted","Data":"2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10"} Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.354080 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k5ktg" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.354576 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.354861 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.355124 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.355478 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.355930 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.356246 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.356674 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.357044 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.357333 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.357699 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.358087 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.358424 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.968910 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerStarted","Data":"2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db"} Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.970284 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerStarted","Data":"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b"} Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.971939 4724 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="55b300cfc31bf5c9398fbf368803a19bfdb358f6e44ff074bb448fffed901982" exitCode=0 Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.972035 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"55b300cfc31bf5c9398fbf368803a19bfdb358f6e44ff074bb448fffed901982"} Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.972359 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.972374 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da" Feb 26 11:11:35 crc kubenswrapper[4724]: E0226 11:11:35.972986 4724 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.973031 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.973334 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.973588 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.973828 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.974109 4724 status_manager.go:851] 
"Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.974357 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.974676 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.975147 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.975607 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.975786 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.976023 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.976652 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.976730 4724 generic.go:334] "Generic (PLEG): container finished" podID="056030ad-19ca-4542-a486-139eb62524b0" containerID="d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb" exitCode=0 Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.979376 4724 generic.go:334] "Generic (PLEG): container finished" podID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" 
containerID="de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40" exitCode=0 Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.982344 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerStarted","Data":"34bd3724bd7f361d1cde69fdf74167630cb7f6bd8f6b0023e121e01a2c0b03f2"} Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.982375 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerDied","Data":"d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb"} Feb 26 11:11:35 crc kubenswrapper[4724]: I0226 11:11:35.982387 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerDied","Data":"de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40"} Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.987645 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f727f37-5bac-476b-88a0-3d751c47e264" containerID="14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b" exitCode=0 Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.987676 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerDied","Data":"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b"} Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.988352 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.989890 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.990136 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.991334 4724 status_manager.go:851] "Failed to get status for pod" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" pod="openshift-marketplace/redhat-operators-64lrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64lrq\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.991566 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.992469 4724 generic.go:334] "Generic (PLEG): container finished" podID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerID="34bd3724bd7f361d1cde69fdf74167630cb7f6bd8f6b0023e121e01a2c0b03f2" exitCode=0 Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.992509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerDied","Data":"34bd3724bd7f361d1cde69fdf74167630cb7f6bd8f6b0023e121e01a2c0b03f2"} Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.992897 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.993291 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.994098 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.994371 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.994655 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.994883 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.995153 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 
26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.995448 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.995799 4724 status_manager.go:851] "Failed to get status for pod" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" pod="openshift-marketplace/certified-operators-92dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-92dsj\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.995987 4724 status_manager.go:851] "Failed to get status for pod" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" pod="openshift-marketplace/community-operators-vlps5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vlps5\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.996197 4724 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.996396 4724 status_manager.go:851] "Failed to get status for pod" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" pod="openshift-marketplace/certified-operators-2gkcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2gkcb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.996592 4724 status_manager.go:851] "Failed to get status for pod" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" pod="openshift-marketplace/redhat-operators-mqtct" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mqtct\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.996779 4724 status_manager.go:851] "Failed to get status for pod" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" pod="openshift-marketplace/redhat-operators-64lrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-64lrq\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.996972 4724 status_manager.go:851] "Failed to get status for pod" podUID="7027d958-98c3-4fd1-9442-232be60e1eb7" pod="openshift-console/downloads-7954f5f757-k5ktg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-k5ktg\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.997192 4724 status_manager.go:851] "Failed to get status for pod" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" pod="openshift-marketplace/community-operators-p9shd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9shd\": dial tcp 
38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.997385 4724 status_manager.go:851] "Failed to get status for pod" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.997577 4724 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.997742 4724 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.998105 4724 status_manager.go:851] "Failed to get status for pod" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" pod="openshift-marketplace/redhat-marketplace-hj7c4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-hj7c4\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.998432 4724 status_manager.go:851] "Failed to get status for pod" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" pod="openshift-infra/auto-csr-approver-29535070-lxjqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535070-lxjqb\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.998976 4724 status_manager.go:851] "Failed to get status for pod" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gv7d\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.999336 4724 status_manager.go:851] "Failed to get status for pod" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" pod="openshift-infra/auto-csr-approver-29535068-crjcm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535068-crjcm\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:36 crc kubenswrapper[4724]: I0226 11:11:36.999676 4724 status_manager.go:851] "Failed to get status for pod" podUID="056030ad-19ca-4542-a486-139eb62524b0" pod="openshift-marketplace/redhat-marketplace-xb5gc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xb5gc\": dial tcp 38.102.83.145:6443: connect: connection refused" Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.293101 4724 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/redhat-operators-mqtct.1897c7501f33bf7e\": dial tcp 38.102.83.145:6443: 
connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-mqtct.1897c7501f33bf7e openshift-marketplace 28461 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-mqtct,UID:48a2c1ec-376b-440a-9dd2-6037d5dfdd1f,APIVersion:v1,ResourceVersion:28317,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 11:08:45 +0000 UTC,LastTimestamp:2026-02-26 11:11:04.292422828 +0000 UTC m=+330.948161943,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.956951 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T11:11:37Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:6f99fe8f155ece83937498888b07c76622c4a9d57faf85421c58e98dbe91a201\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:a0392905d5528ae4396253f0fb315540a65e9d041a23fa7204ff4c50096706ae\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1706887383},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a39473d1443594317812b9e453bc1338c8a047114ef1036a02fa1a6f727cc400\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d319aa4bb0ff5d32a48b47a7cb516d0cf980ced429c362b5180986f874da5d40\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1257183961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d0f5facf1d0e6c487de9741d96bd2ca8f5d0bd808390ab8f986f9930acbf9d13\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:48c75e09d93cb5f991aeb25a6a7331f20014fc7a025cfb1ac3ca4e65f8a525a9\\\",\\\"registry.redhat.io/redhat/communi
ty-operator-index@sha256:b621350662f546812f6c4d8dc3746e7f9aa73481a87621c54429ecde0129e07e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1215623375},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af2
1b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.957464 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.957731 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.957999 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.958253 4724 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Feb 26 11:11:37 crc kubenswrapper[4724]: E0226 11:11:37.958284 4724 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 26 11:11:38 crc kubenswrapper[4724]: I0226 11:11:38.872543 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2gkcb"
Feb 26 11:11:38 crc kubenswrapper[4724]: I0226 11:11:38.872943 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2gkcb"
Feb 26 11:11:38 crc kubenswrapper[4724]: I0226 11:11:38.957091 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9shd"
Feb 26 11:11:38 crc kubenswrapper[4724]: I0226 11:11:38.957158 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9shd"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.010474 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.011311 4724 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f" exitCode=1
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.011398 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f"}
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.012277 4724 scope.go:117] "RemoveContainer" containerID="357842a7bd65182d1db3de04d6a26fe1cafbaf598c383f4f001c4a8d150ef39f"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.015766 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b58ae02be44e5a64e72cc27abc66188d5d8e44639c7752b99e7452ebfb233819"}
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.080497 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlps5"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.080556 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlps5"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.610943 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2gkcb"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.613864 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlps5"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.615944 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9shd"
Feb 26 11:11:39 crc kubenswrapper[4724]: I0226 11:11:39.678152 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2gkcb"
Feb 26 11:11:40 crc kubenswrapper[4724]: I0226 11:11:40.021745 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4ace7275c644cddbdbfe58ada5e07fb30799b1f294cad93cf93d168c0a76453d"}
Feb 26 11:11:40 crc kubenswrapper[4724]: I0226 11:11:40.076302 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlps5"
Feb 26 11:11:40 crc kubenswrapper[4724]: I0226 11:11:40.855531 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 26 11:11:40 crc kubenswrapper[4724]: I0226 11:11:40.855619 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 26 11:11:41 crc kubenswrapper[4724]: I0226 11:11:41.027611 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log"
Feb 26 11:11:41 crc kubenswrapper[4724]: I0226 11:11:41.028529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"094269abe35265fd57465439eab8424b25d8c1aaf217f1a8a3bc4892392f7912"}
Feb 26 11:11:41 crc kubenswrapper[4724]: I0226 11:11:41.083099 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hj7c4"
Feb 26 11:11:41 crc kubenswrapper[4724]: I0226 11:11:41.083148 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hj7c4"
Feb 26 11:11:41 crc kubenswrapper[4724]: I0226 11:11:41.130867 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hj7c4"
Feb 26 11:11:42 crc kubenswrapper[4724]: I0226 11:11:42.079159 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hj7c4"
Feb 26 11:11:43 crc kubenswrapper[4724]: I0226 11:11:43.046421 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"180b07a383de2edc6fd226b072c29c77889f2897a88f20165e890df3ca39b426"}
Feb 26 11:11:48 crc kubenswrapper[4724]: I0226 11:11:48.990021 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9shd"
Feb 26 11:11:50 crc kubenswrapper[4724]: I0226 11:11:50.855518 4724 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 26 11:11:50 crc kubenswrapper[4724]: I0226 11:11:50.856065 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 26 11:11:50 crc kubenswrapper[4724]: I0226 11:11:50.856123 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:11:50 crc kubenswrapper[4724]: I0226 11:11:50.856931 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5982ccc4e0be4127f06743b6c974c80d48e068250ba4adadde6014566cd152b2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 26 11:11:50 crc kubenswrapper[4724]: I0226 11:11:50.857050 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://5982ccc4e0be4127f06743b6c974c80d48e068250ba4adadde6014566cd152b2" gracePeriod=30
Feb 26 11:11:55 crc kubenswrapper[4724]: I0226 11:11:55.114999 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerStarted","Data":"3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00"}
Feb 26 11:11:55 crc kubenswrapper[4724]: I0226 11:11:55.119660 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f65e3e342233c083ecd39366caefc942eeca36d226de35be1b17167c78123e47"}
Feb 26 11:11:55 crc kubenswrapper[4724]: I0226 11:11:55.121908 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerStarted","Data":"c805f42b4b6b1239e46c0e5d1cf780973199cb7010c00ebdeaa519107094af98"}
Feb 26 11:11:55 crc kubenswrapper[4724]: I0226 11:11:55.124041 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerStarted","Data":"e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8"}
Feb 26 11:11:55 crc kubenswrapper[4724]: I0226 11:11:55.126080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerStarted","Data":"68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611"}
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.135590 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f681929848c39d7da9c51042d0b7ad9228802f63704ddf424b3525e0d6f9c9d"}
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.136102 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.136241 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.151675 4724 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.160149 4724 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ab7cfa-6ae2-41b0-be77-ceebb3b237da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b58ae02be44e5a64e72cc27abc66188d5d8e44639c7752b99e7452ebfb233819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180b07a383de2edc6fd226b072c29c77889f2897a88f20165e890df3ca39b426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ace7275c644cddbdbfe58ada5e07fb30799b1f294cad93cf93d168c0a76453d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f681929848c39d7da9c51042d0b7ad9228802f63704ddf424b3525e0d6f9c9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e342233c083ecd39366caefc942eeca36d226de35be1b17167c78123e47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T11:11:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found"
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.996226 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:56 crc kubenswrapper[4724]: I0226 11:11:56.996294 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:57 crc kubenswrapper[4724]: I0226 11:11:57.002442 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:57 crc kubenswrapper[4724]: I0226 11:11:57.006615 4724 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ec08a0b5-55d2-47db-b92f-11dac85c1eab"
Feb 26 11:11:57 crc kubenswrapper[4724]: I0226 11:11:57.141794 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:57 crc kubenswrapper[4724]: I0226 11:11:57.141936 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:57 crc kubenswrapper[4724]: I0226 11:11:57.141959 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:58 crc kubenswrapper[4724]: I0226 11:11:58.146272 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:58 crc kubenswrapper[4724]: I0226 11:11:58.146591 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:58 crc kubenswrapper[4724]: I0226 11:11:58.151294 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 11:11:58 crc kubenswrapper[4724]: I0226 11:11:58.447052 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92dsj"
Feb 26 11:11:58 crc kubenswrapper[4724]: I0226 11:11:58.447109 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92dsj"
Feb 26 11:11:58 crc kubenswrapper[4724]: I0226 11:11:58.492902 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92dsj"
Feb 26 11:11:59 crc kubenswrapper[4724]: I0226 11:11:59.153736 4724 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:59 crc kubenswrapper[4724]: I0226 11:11:59.153771 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b0ab7cfa-6ae2-41b0-be77-ceebb3b237da"
Feb 26 11:11:59 crc kubenswrapper[4724]: I0226 11:11:59.192286 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92dsj"
Feb 26 11:12:00 crc kubenswrapper[4724]: I0226 11:12:00.768824 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xb5gc"
Feb 26 11:12:00 crc kubenswrapper[4724]: I0226 11:12:00.769147 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xb5gc"
Feb 26 11:12:00 crc kubenswrapper[4724]: I0226 11:12:00.828600 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xb5gc"
Feb 26 11:12:01 crc kubenswrapper[4724]: I0226 11:12:01.203661 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xb5gc"
Feb 26 11:12:01 crc kubenswrapper[4724]: I0226 11:12:01.619091 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mqtct"
Feb 26 11:12:01 crc kubenswrapper[4724]: I0226 11:12:01.619134 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mqtct"
Feb 26 11:12:01 crc kubenswrapper[4724]: I0226 11:12:01.656335 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mqtct"
Feb 26 11:12:02 crc kubenswrapper[4724]: I0226 11:12:02.105419 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-64lrq"
Feb 26 11:12:02 crc kubenswrapper[4724]: I0226 11:12:02.105468 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-64lrq"
Feb 26 11:12:02 crc kubenswrapper[4724]: I0226 11:12:02.144894 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-64lrq"
Feb 26 11:12:02 crc kubenswrapper[4724]: I0226 11:12:02.205759 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mqtct"
Feb 26 11:12:02 crc kubenswrapper[4724]: I0226 11:12:02.210353 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-64lrq"
Feb 26 11:12:04 crc kubenswrapper[4724]: I0226 11:12:04.022029 4724 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ec08a0b5-55d2-47db-b92f-11dac85c1eab"
Feb 26 11:12:08 crc kubenswrapper[4724]: I0226 11:12:08.334572 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 26 11:12:08 crc kubenswrapper[4724]: I0226 11:12:08.345947 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 26 11:12:08 crc kubenswrapper[4724]: I0226 11:12:08.601029 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 26 11:12:08 crc kubenswrapper[4724]: I0226 11:12:08.644002 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 26 11:12:09 crc kubenswrapper[4724]: I0226 11:12:09.763756 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 26 11:12:10 crc kubenswrapper[4724]: I0226 11:12:10.513822 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 26 11:12:10 crc kubenswrapper[4724]: I0226 11:12:10.695135 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.150568 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.211956 4724 generic.go:334] "Generic (PLEG): container finished" podID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerID="70484096b07cc818074617dff45ec4339f1ec6e33f114f56f47e4b6f2c344ac9" exitCode=0
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.212005 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerDied","Data":"70484096b07cc818074617dff45ec4339f1ec6e33f114f56f47e4b6f2c344ac9"}
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.212477 4724 scope.go:117] "RemoveContainer" containerID="70484096b07cc818074617dff45ec4339f1ec6e33f114f56f47e4b6f2c344ac9"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.238443 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.278130 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.544730 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.576205 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.616617 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.704442 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 26 11:12:11 crc kubenswrapper[4724]: I0226 11:12:11.909117 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.067773 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.153693 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.219674 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/1.log"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.219985 4724 generic.go:334] "Generic (PLEG): container finished" podID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerID="3d029f58f1a37fb37bb2e046a2d7b6cc28538efc65a3bebeaafd2cc17940b648" exitCode=1
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.220014 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerDied","Data":"3d029f58f1a37fb37bb2e046a2d7b6cc28538efc65a3bebeaafd2cc17940b648"}
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.220043 4724 scope.go:117] "RemoveContainer" containerID="70484096b07cc818074617dff45ec4339f1ec6e33f114f56f47e4b6f2c344ac9"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.220465 4724 scope.go:117] "RemoveContainer" containerID="3d029f58f1a37fb37bb2e046a2d7b6cc28538efc65a3bebeaafd2cc17940b648"
Feb 26 11:12:12 crc kubenswrapper[4724]: E0226 11:12:12.220641 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.350604 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.411017 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 26 11:12:12 crc kubenswrapper[4724]: I0226 11:12:12.817915 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 26 11:12:13 crc kubenswrapper[4724]: I0226 11:12:13.118938 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 26 11:12:13 crc kubenswrapper[4724]: I0226 11:12:13.225360 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 26 11:12:13 crc kubenswrapper[4724]: I0226 11:12:13.226500 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 26 11:12:13 crc kubenswrapper[4724]: I0226 11:12:13.228863 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/1.log"
Feb 26 11:12:13 crc kubenswrapper[4724]: I0226 11:12:13.637459 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 26 11:12:14 crc kubenswrapper[4724]: I0226 11:12:14.344722 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 26 11:12:14 crc kubenswrapper[4724]: I0226 11:12:14.379703 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 26 11:12:14 crc kubenswrapper[4724]: I0226 11:12:14.453458 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 26 11:12:14 crc kubenswrapper[4724]: I0226 11:12:14.716829 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 26 11:12:14 crc kubenswrapper[4724]: I0226 11:12:14.837787 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 26 11:12:14 crc kubenswrapper[4724]: I0226 11:12:14.909766 4724 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 26 11:12:15 crc kubenswrapper[4724]: I0226 11:12:15.056771 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 26 11:12:15 crc kubenswrapper[4724]: I0226 11:12:15.099416 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 26 11:12:15 crc kubenswrapper[4724]: I0226 11:12:15.310524 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 26 11:12:15 crc kubenswrapper[4724]: I0226 11:12:15.396439 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 26 11:12:15 crc kubenswrapper[4724]: I0226 11:12:15.651868 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 26 11:12:15 crc kubenswrapper[4724]: I0226 11:12:15.911082 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.107799 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.107857 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.108375 4724 scope.go:117] "RemoveContainer" containerID="3d029f58f1a37fb37bb2e046a2d7b6cc28538efc65a3bebeaafd2cc17940b648"
Feb 26 11:12:16 crc kubenswrapper[4724]: E0226 11:12:16.108559 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.437101 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.612391 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.618937 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.771561 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.827281 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.889615 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 26 11:12:16 crc kubenswrapper[4724]: I0226 11:12:16.978169 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 26 11:12:17 crc kubenswrapper[4724]: I0226 11:12:17.440104 4724 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 26 11:12:17 crc kubenswrapper[4724]: I0226 11:12:17.580116 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 26 11:12:17 crc kubenswrapper[4724]: I0226 11:12:17.837509 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 26 11:12:18 crc kubenswrapper[4724]: I0226 11:12:18.050591 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 26 11:12:18 crc kubenswrapper[4724]: I0226 11:12:18.149760 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 26 11:12:18 crc kubenswrapper[4724]: I0226 11:12:18.166314 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 26 11:12:18 crc kubenswrapper[4724]: I0226 11:12:18.188717 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 26 11:12:18 crc kubenswrapper[4724]: I0226 11:12:18.630062 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 26 11:12:18 crc kubenswrapper[4724]: I0226 11:12:18.897522 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 26 11:12:19 crc kubenswrapper[4724]: I0226 11:12:19.071377 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 26 11:12:19 crc kubenswrapper[4724]: I0226 11:12:19.089251 4724 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 26 11:12:19 crc kubenswrapper[4724]: I0226 11:12:19.984331 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 26 11:12:20 crc kubenswrapper[4724]: I0226 11:12:20.391870 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 26 11:12:20 crc kubenswrapper[4724]: I0226 11:12:20.438932 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 26 11:12:20 crc kubenswrapper[4724]: I0226 11:12:20.442043 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 26 11:12:20 crc kubenswrapper[4724]: I0226 11:12:20.576788 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 26 11:12:20 crc kubenswrapper[4724]: I0226 11:12:20.808533 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 26 11:12:20 crc kubenswrapper[4724]: I0226 11:12:20.829143 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.252052 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.284674 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.285433 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.286682 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.286733 4724 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5982ccc4e0be4127f06743b6c974c80d48e068250ba4adadde6014566cd152b2" exitCode=137
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.286760 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5982ccc4e0be4127f06743b6c974c80d48e068250ba4adadde6014566cd152b2"}
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.286789 4724 scope.go:117] "RemoveContainer" containerID="30fabdbc207283755407fe33ee609345de6dcab0c4a0272e0b04a3cf02daf7eb"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.622795 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 26 11:12:21 crc kubenswrapper[4724]: I0226 11:12:21.711841 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.010237 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.096844 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.254379 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.294356 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.295100 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.295656 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"902176b60a20df023f27c138e052af0c49af90a846eb8d37dbe44a687b793a2a"}
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.354903 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.371441 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.442232 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.447355 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.463126 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.508502 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.583591 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.835744 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.852151 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 26 11:12:22 crc kubenswrapper[4724]: I0226 11:12:22.915472 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 26 11:12:23 crc kubenswrapper[4724]: I0226 11:12:23.186487 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 26 11:12:23 crc kubenswrapper[4724]: I0226 11:12:23.930266 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 26 11:12:24 crc kubenswrapper[4724]: I0226 11:12:24.046253 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 26 11:12:24 crc kubenswrapper[4724]: I0226 11:12:24.377483 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 26 11:12:26 crc kubenswrapper[4724]: I0226 11:12:26.029949 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 26 11:12:26 crc kubenswrapper[4724]: I0226 11:12:26.452041 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 26 11:12:26 crc kubenswrapper[4724]: I0226 11:12:26.465595 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 26 11:12:26 crc kubenswrapper[4724]: I0226 11:12:26.466112 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 26 11:12:26 crc kubenswrapper[4724]: I0226 11:12:26.660143 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.112841 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.273248 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.422383 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.513279 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.698410 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.767539 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 26 11:12:27 crc kubenswrapper[4724]: I0226 11:12:27.976353 4724 scope.go:117] "RemoveContainer" containerID="3d029f58f1a37fb37bb2e046a2d7b6cc28538efc65a3bebeaafd2cc17940b648"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.330964 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/2.log"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.331676 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/1.log"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.331712 4724 generic.go:334] "Generic (PLEG): container finished" podID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6" exitCode=1
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.331744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerDied","Data":"30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6"}
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.331783 4724 scope.go:117] "RemoveContainer" containerID="3d029f58f1a37fb37bb2e046a2d7b6cc28538efc65a3bebeaafd2cc17940b648"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.332429 4724 scope.go:117] "RemoveContainer" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6"
Feb 26 11:12:28 crc kubenswrapper[4724]: E0226 11:12:28.332627 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.524567 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.630157 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 26 11:12:28 crc kubenswrapper[4724]: I0226 11:12:28.685741 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 26 11:12:29 crc kubenswrapper[4724]: I0226 11:12:29.340345 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/2.log"
Feb 26 11:12:29 crc kubenswrapper[4724]: I0226 11:12:29.381564 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 26 11:12:29 crc kubenswrapper[4724]: I0226 11:12:29.431038 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 26 11:12:29 crc kubenswrapper[4724]: I0226 11:12:29.564081 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 26 11:12:29 crc kubenswrapper[4724]: I0226 11:12:29.643156 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.287908 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.305984 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.442638 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.471051 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.736715 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.807364 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.855665 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.857643 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.860870 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.934375 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 26 11:12:30 crc kubenswrapper[4724]: I0226 11:12:30.944613 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 26 11:12:31 crc kubenswrapper[4724]: I0226 11:12:31.105313 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 26 11:12:31 crc kubenswrapper[4724]: I0226 11:12:31.200763 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 26 11:12:31 crc kubenswrapper[4724]: I0226 11:12:31.354890 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 11:12:31 crc kubenswrapper[4724]: I0226 11:12:31.524578 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 26 11:12:32 crc kubenswrapper[4724]: I0226 11:12:32.022467 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 26 11:12:32 crc kubenswrapper[4724]: I0226 11:12:32.070714 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 26 11:12:32 crc kubenswrapper[4724]: I0226 11:12:32.116476 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 26 11:12:32 crc kubenswrapper[4724]: I0226 11:12:32.283886 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.028109 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.212412 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.339616 4724 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.511291 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.614456 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.624928 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 26 11:12:33 crc kubenswrapper[4724]: I0226 11:12:33.852535 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 26 11:12:34 crc kubenswrapper[4724]: I0226 11:12:34.255544 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 26 11:12:34 crc kubenswrapper[4724]: I0226 11:12:34.796756 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 26 11:12:35 crc kubenswrapper[4724]: I0226 11:12:35.102701 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 26 11:12:35 crc kubenswrapper[4724]: I0226 11:12:35.381277 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 26 11:12:35 crc kubenswrapper[4724]: I0226 11:12:35.432628 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 26 11:12:35 crc kubenswrapper[4724]: I0226 11:12:35.526374 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.097308 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.107469 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.107946 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.108253 4724 scope.go:117] "RemoveContainer" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6"
Feb 26 11:12:36 crc kubenswrapper[4724]: E0226 11:12:36.108514 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.163276 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.190408 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.271292 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 26 11:12:36 crc kubenswrapper[4724]: I0226 11:12:36.384158 4724 scope.go:117] "RemoveContainer" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6"
Feb 26 11:12:36 crc kubenswrapper[4724]: E0226 11:12:36.384359 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5"
Feb 26 11:12:37 crc kubenswrapper[4724]: I0226 11:12:37.177872 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 26 11:12:37 crc kubenswrapper[4724]: I0226 11:12:37.768187 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 26 11:12:38 crc kubenswrapper[4724]: I0226 11:12:38.006375 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 26 11:12:38 crc kubenswrapper[4724]: I0226 11:12:38.074571 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 26 11:12:39 crc kubenswrapper[4724]: I0226 11:12:39.233364 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 26 11:12:39 crc kubenswrapper[4724]: I0226 11:12:39.506210 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 26 11:12:39 crc kubenswrapper[4724]: I0226 11:12:39.554367 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 26 11:12:39 crc kubenswrapper[4724]: I0226 11:12:39.959956 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 26 11:12:41 crc kubenswrapper[4724]: I0226 11:12:41.147646 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 26 11:12:41 crc kubenswrapper[4724]: I0226 11:12:41.794496 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 26 11:12:42 crc kubenswrapper[4724]: I0226 11:12:42.705255 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 26 11:12:43 crc kubenswrapper[4724]: I0226 11:12:43.101717 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 26 11:12:43 crc kubenswrapper[4724]: I0226 11:12:43.466995 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 26 11:12:43 crc kubenswrapper[4724]: I0226 11:12:43.515018 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 26 11:12:44 crc kubenswrapper[4724]: I0226 11:12:44.366512 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 26 11:12:45 crc kubenswrapper[4724]: I0226 11:12:45.242870 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 26 11:12:45 crc kubenswrapper[4724]: I0226 11:12:45.454305 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 26 11:12:46 crc kubenswrapper[4724]: I0226 11:12:46.226523 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 26 11:12:47 crc kubenswrapper[4724]: I0226 11:12:47.090685 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 26 11:12:47 crc kubenswrapper[4724]: I0226 11:12:47.218120 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 26 11:12:47 crc kubenswrapper[4724]: I0226 11:12:47.572212 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 26 11:12:47 crc kubenswrapper[4724]: I0226 11:12:47.859742 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 26 11:12:47 crc kubenswrapper[4724]: I0226 11:12:47.939721 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 26 11:12:47 crc kubenswrapper[4724]: I0226 11:12:47.975115 4724 scope.go:117] "RemoveContainer" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6"
Feb 26 11:12:47 crc kubenswrapper[4724]: E0226 11:12:47.975334 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5"
Feb 26 11:12:48 crc kubenswrapper[4724]: I0226 11:12:48.030254 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 26 11:12:48 crc kubenswrapper[4724]: I0226 11:12:48.906318 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 26 11:12:48 crc kubenswrapper[4724]: I0226 11:12:48.943796 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 26 11:12:49 crc kubenswrapper[4724]: I0226 11:12:49.781014 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 26 11:12:50 crc kubenswrapper[4724]: I0226 11:12:49.999725 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 26 11:12:50 crc kubenswrapper[4724]: I0226 11:12:50.511756 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 26 11:12:51 crc kubenswrapper[4724]: I0226 11:12:51.545981 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 26 11:12:51 crc kubenswrapper[4724]: I0226 11:12:51.660677 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 26 11:12:51 crc kubenswrapper[4724]: I0226 11:12:51.887330 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.205062 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.414891 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.735633 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.882578 4724 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.882846 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xb5gc" podStartSLOduration=64.592220245 podStartE2EDuration="4m12.882827787s" podCreationTimestamp="2026-02-26 11:08:40 +0000 UTC" firstStartedPulling="2026-02-26 11:08:45.389218417 +0000 UTC 
m=+192.044957532" lastFinishedPulling="2026-02-26 11:11:53.679825959 +0000 UTC m=+380.335565074" observedRunningTime="2026-02-26 11:11:55.166841943 +0000 UTC m=+381.822581068" watchObservedRunningTime="2026-02-26 11:12:52.882827787 +0000 UTC m=+439.538566922" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.883555 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9shd" podStartSLOduration=89.158027035 podStartE2EDuration="4m14.883549356s" podCreationTimestamp="2026-02-26 11:08:38 +0000 UTC" firstStartedPulling="2026-02-26 11:08:45.388829986 +0000 UTC m=+192.044569101" lastFinishedPulling="2026-02-26 11:11:31.114352317 +0000 UTC m=+357.770091422" observedRunningTime="2026-02-26 11:11:44.73157724 +0000 UTC m=+371.387316375" watchObservedRunningTime="2026-02-26 11:12:52.883549356 +0000 UTC m=+439.539288471" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.883641 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlps5" podStartSLOduration=89.150692978 podStartE2EDuration="4m14.883636768s" podCreationTimestamp="2026-02-26 11:08:38 +0000 UTC" firstStartedPulling="2026-02-26 11:08:45.377990266 +0000 UTC m=+192.033729381" lastFinishedPulling="2026-02-26 11:11:31.110934056 +0000 UTC m=+357.766673171" observedRunningTime="2026-02-26 11:11:44.631470481 +0000 UTC m=+371.287209616" watchObservedRunningTime="2026-02-26 11:12:52.883636768 +0000 UTC m=+439.539375883" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.883924 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=114.883916755 podStartE2EDuration="1m54.883916755s" podCreationTimestamp="2026-02-26 11:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:11:44.765391238 +0000 UTC m=+371.421130353" watchObservedRunningTime="2026-02-26 11:12:52.883916755 +0000 UTC m=+439.539655870" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.884218 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mqtct" podStartSLOduration=63.436165496 podStartE2EDuration="4m11.884209453s" podCreationTimestamp="2026-02-26 11:08:41 +0000 UTC" firstStartedPulling="2026-02-26 11:08:45.328370148 +0000 UTC m=+191.984109263" lastFinishedPulling="2026-02-26 11:11:53.776414105 +0000 UTC m=+380.432153220" observedRunningTime="2026-02-26 11:11:55.215347901 +0000 UTC m=+381.871087046" watchObservedRunningTime="2026-02-26 11:12:52.884209453 +0000 UTC m=+439.539948588" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.884327 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2gkcb" podStartSLOduration=88.079125883 podStartE2EDuration="4m14.884324016s" podCreationTimestamp="2026-02-26 11:08:38 +0000 UTC" firstStartedPulling="2026-02-26 11:08:44.305815345 +0000 UTC m=+190.961554460" lastFinishedPulling="2026-02-26 11:11:31.111013478 +0000 UTC m=+357.766752593" observedRunningTime="2026-02-26 11:11:44.667528189 +0000 UTC m=+371.323267314" watchObservedRunningTime="2026-02-26 11:12:52.884324016 +0000 UTC m=+439.540063131" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.884867 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-92dsj" podStartSLOduration=63.130870871 podStartE2EDuration="4m14.88486322s" podCreationTimestamp="2026-02-26 11:08:38 +0000 UTC" firstStartedPulling="2026-02-26 11:08:42.022391305 +0000 UTC m=+188.678130410" lastFinishedPulling="2026-02-26 11:11:53.776383644 +0000 UTC m=+380.432122759" observedRunningTime="2026-02-26 11:11:55.190656335 +0000 UTC m=+381.846395470" watchObservedRunningTime="2026-02-26 11:12:52.88486322 +0000 UTC m=+439.540602335" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.884947 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hj7c4" podStartSLOduration=90.971187578 podStartE2EDuration="4m12.884943122s" podCreationTimestamp="2026-02-26 11:08:40 +0000 UTC" firstStartedPulling="2026-02-26 11:08:45.389481824 +0000 UTC m=+192.045220939" lastFinishedPulling="2026-02-26 11:11:27.303237368 +0000 UTC m=+353.958976483" observedRunningTime="2026-02-26 11:11:44.793386502 +0000 UTC m=+371.449125637" watchObservedRunningTime="2026-02-26 11:12:52.884943122 +0000 UTC m=+439.540682237" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.886866 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-64lrq" podStartSLOduration=64.652072385 podStartE2EDuration="4m11.886860872s" podCreationTimestamp="2026-02-26 11:08:41 +0000 UTC" firstStartedPulling="2026-02-26 11:08:46.435459718 +0000 UTC m=+193.091198833" lastFinishedPulling="2026-02-26 11:11:53.670248205 +0000 UTC m=+380.325987320" observedRunningTime="2026-02-26 11:11:55.133098077 +0000 UTC m=+381.788837212" watchObservedRunningTime="2026-02-26 11:12:52.886860872 +0000 UTC m=+439.542599987" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888221 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888265 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535072-h4cjv","openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 11:12:52 crc kubenswrapper[4724]: E0226 11:12:52.888489 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" containerName="oc" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888510 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" containerName="oc" Feb 26 11:12:52 crc kubenswrapper[4724]: E0226 11:12:52.888533 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" containerName="oc" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888547 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" containerName="oc" Feb 26 11:12:52 crc kubenswrapper[4724]: E0226 11:12:52.888559 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" containerName="installer" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888566 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" containerName="installer" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888691 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" containerName="oc" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888711 4724 
memory_manager.go:354] "RemoveStaleState removing state" podUID="99d4a0b0-dbd2-44f9-afb9-087ea5165db7" containerName="installer" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.888722 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" containerName="oc" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.889412 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.892132 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.893157 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.893270 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.893286 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.927985 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=56.927967834 podStartE2EDuration="56.927967834s" podCreationTimestamp="2026-02-26 11:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:12:52.90979533 +0000 UTC m=+439.565534465" watchObservedRunningTime="2026-02-26 11:12:52.927967834 +0000 UTC m=+439.583706959" Feb 26 11:12:52 crc kubenswrapper[4724]: I0226 11:12:52.987574 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzk4\" (UniqueName: \"kubernetes.io/projected/5f17d90f-02c5-4721-9f39-2f50cafbd329-kube-api-access-qmzk4\") pod \"auto-csr-approver-29535072-h4cjv\" (UID: \"5f17d90f-02c5-4721-9f39-2f50cafbd329\") " pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.037706 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.088587 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmzk4\" (UniqueName: \"kubernetes.io/projected/5f17d90f-02c5-4721-9f39-2f50cafbd329-kube-api-access-qmzk4\") pod \"auto-csr-approver-29535072-h4cjv\" (UID: \"5f17d90f-02c5-4721-9f39-2f50cafbd329\") " pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.110845 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmzk4\" (UniqueName: \"kubernetes.io/projected/5f17d90f-02c5-4721-9f39-2f50cafbd329-kube-api-access-qmzk4\") pod \"auto-csr-approver-29535072-h4cjv\" (UID: \"5f17d90f-02c5-4721-9f39-2f50cafbd329\") " pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.210537 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.365109 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.385440 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.408459 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.699127 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 11:12:53 crc kubenswrapper[4724]: I0226 11:12:53.928238 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 11:12:54 crc kubenswrapper[4724]: I0226 11:12:54.300746 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:12:54 crc kubenswrapper[4724]: I0226 11:12:54.593371 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 11:12:54 crc kubenswrapper[4724]: I0226 11:12:54.700688 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 11:12:54 crc kubenswrapper[4724]: I0226 11:12:54.750040 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 11:12:55 crc kubenswrapper[4724]: I0226 11:12:55.172101 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 11:12:55 crc kubenswrapper[4724]: I0226 11:12:55.281706 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 11:12:55 crc kubenswrapper[4724]: I0226 11:12:55.421469 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 11:12:56 crc kubenswrapper[4724]: I0226 11:12:56.022471 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 11:12:56 crc kubenswrapper[4724]: I0226 11:12:56.278152 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 11:12:56 crc kubenswrapper[4724]: I0226 11:12:56.536684 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 11:12:56 crc kubenswrapper[4724]: I0226 11:12:56.693374 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 11:12:56 crc kubenswrapper[4724]: I0226 11:12:56.851696 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 11:12:57.345231 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 
11:12:57.502245 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xw4vt_e87b7bd7-9d39-48f0-b896-fe5da437416f/control-plane-machine-set-operator/0.log" Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 11:12:57.502315 4724 generic.go:334] "Generic (PLEG): container finished" podID="e87b7bd7-9d39-48f0-b896-fe5da437416f" containerID="508d1c792460bc39db9ae8e965f3827158743c9a641c7c3ea4aa1eeb8901435f" exitCode=1 Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 11:12:57.502356 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" event={"ID":"e87b7bd7-9d39-48f0-b896-fe5da437416f","Type":"ContainerDied","Data":"508d1c792460bc39db9ae8e965f3827158743c9a641c7c3ea4aa1eeb8901435f"} Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 11:12:57.503056 4724 scope.go:117] "RemoveContainer" containerID="508d1c792460bc39db9ae8e965f3827158743c9a641c7c3ea4aa1eeb8901435f" Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 11:12:57.865229 4724 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 11:12:57 crc kubenswrapper[4724]: I0226 11:12:57.981483 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 11:12:58 crc kubenswrapper[4724]: I0226 11:12:58.313560 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:12:58 crc kubenswrapper[4724]: I0226 11:12:58.508564 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xw4vt_e87b7bd7-9d39-48f0-b896-fe5da437416f/control-plane-machine-set-operator/0.log" Feb 26 11:12:58 crc kubenswrapper[4724]: I0226 11:12:58.509168 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xw4vt" event={"ID":"e87b7bd7-9d39-48f0-b896-fe5da437416f","Type":"ContainerStarted","Data":"5a8d6e98df3d40462c04fb0cd4e61c583b27a4c67f1f946bd3eec2871938177a"} Feb 26 11:12:58 crc kubenswrapper[4724]: I0226 11:12:58.556374 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 11:12:58 crc kubenswrapper[4724]: I0226 11:12:58.975323 4724 scope.go:117] "RemoveContainer" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.198685 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.389048 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.514543 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/3.log" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.515394 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/2.log" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.515434 4724 generic.go:334] 
"Generic (PLEG): container finished" podID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" exitCode=1 Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.515463 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerDied","Data":"416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c"} Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.515497 4724 scope.go:117] "RemoveContainer" containerID="30b2863202d13d94c3ffa53e0e97e09d76ffb347a178cefa99009a81e7a40fb6" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.516087 4724 scope.go:117] "RemoveContainer" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" Feb 26 11:12:59 crc kubenswrapper[4724]: E0226 11:12:59.516315 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.611571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 11:12:59 crc kubenswrapper[4724]: I0226 11:12:59.939132 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.158985 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.284843 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.438251 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.521820 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/3.log" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.523411 4724 generic.go:334] "Generic (PLEG): container finished" podID="c9371739-6d1a-4872-b11e-b2e915349056" containerID="5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e" exitCode=0 Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.523448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" event={"ID":"c9371739-6d1a-4872-b11e-b2e915349056","Type":"ContainerDied","Data":"5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e"} Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.523886 4724 scope.go:117] "RemoveContainer" containerID="5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.541677 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:13:00 crc 
kubenswrapper[4724]: I0226 11:13:00.696112 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.819227 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 11:13:00 crc kubenswrapper[4724]: I0226 11:13:00.910488 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 11:13:01 crc kubenswrapper[4724]: I0226 11:13:01.072838 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 11:13:01 crc kubenswrapper[4724]: I0226 11:13:01.529992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" event={"ID":"c9371739-6d1a-4872-b11e-b2e915349056","Type":"ContainerStarted","Data":"d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735"} Feb 26 11:13:01 crc kubenswrapper[4724]: I0226 11:13:01.530365 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" Feb 26 11:13:01 crc kubenswrapper[4724]: I0226 11:13:01.534793 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" Feb 26 11:13:02 crc kubenswrapper[4724]: I0226 11:13:02.121883 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 11:13:02 crc kubenswrapper[4724]: I0226 11:13:02.371620 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 11:13:02 crc kubenswrapper[4724]: I0226 11:13:02.607654 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 11:13:02 crc kubenswrapper[4724]: I0226 11:13:02.628462 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 11:13:02 crc kubenswrapper[4724]: I0226 11:13:02.734749 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 11:13:02 crc kubenswrapper[4724]: I0226 11:13:02.901208 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.114563 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.145721 4724 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.145947 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4d06de794208fb38d65f1efc91a7a47e65f41860e048a0d5b7b11c7983277ea2" gracePeriod=5 Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.254844 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 11:13:03 
crc kubenswrapper[4724]: I0226 11:13:03.273996 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.443229 4724 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.795592 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 11:13:03 crc kubenswrapper[4724]: I0226 11:13:03.933502 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 11:13:04 crc kubenswrapper[4724]: I0226 11:13:04.104495 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 11:13:04 crc kubenswrapper[4724]: I0226 11:13:04.226059 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 11:13:04 crc kubenswrapper[4724]: I0226 11:13:04.240931 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 11:13:04 crc kubenswrapper[4724]: I0226 11:13:04.499624 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:04 crc kubenswrapper[4724]: I0226 11:13:04.745285 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 11:13:04 crc kubenswrapper[4724]: I0226 11:13:04.995571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 11:13:05 crc kubenswrapper[4724]: I0226 11:13:05.215430 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 11:13:05 crc kubenswrapper[4724]: I0226 11:13:05.952608 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.107460 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.107504 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.108014 4724 scope.go:117] "RemoveContainer" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" Feb 26 11:13:06 crc kubenswrapper[4724]: E0226 11:13:06.108192 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.146790 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.152698 4724 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.635030 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.906329 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 11:13:06 crc kubenswrapper[4724]: I0226 11:13:06.959995 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 11:13:07 crc kubenswrapper[4724]: I0226 11:13:07.073907 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 11:13:07 crc kubenswrapper[4724]: I0226 11:13:07.846514 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.038984 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 11:13:08 crc kubenswrapper[4724]: E0226 11:13:08.207390 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-4d06de794208fb38d65f1efc91a7a47e65f41860e048a0d5b7b11c7983277ea2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-conmon-4d06de794208fb38d65f1efc91a7a47e65f41860e048a0d5b7b11c7983277ea2.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.530379 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535072-h4cjv"] Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.564993 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.565267 4724 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4d06de794208fb38d65f1efc91a7a47e65f41860e048a0d5b7b11c7983277ea2" exitCode=137 Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.728317 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535072-h4cjv"] Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.760472 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.762292 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.762386 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802020 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802144 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802241 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802275 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802336 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802396 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802434 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802481 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802624 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802770 4724 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802786 4724 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802797 4724 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.802809 4724 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.810966 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:13:08 crc kubenswrapper[4724]: I0226 11:13:08.904279 4724 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.552444 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.572575 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" event={"ID":"5f17d90f-02c5-4721-9f39-2f50cafbd329","Type":"ContainerStarted","Data":"18d74415d25a2aa1b37e1ed4111b25825d3f8a4ac4f26d6f90303b8df4fb8fe7"} Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.574438 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.574524 4724 scope.go:117] "RemoveContainer" containerID="4d06de794208fb38d65f1efc91a7a47e65f41860e048a0d5b7b11c7983277ea2" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.574591 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.982302 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.982558 4724 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.990860 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.990899 4724 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="01388bd7-b9a8-4714-85cd-e02eefad756b" Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.994871 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 11:13:09 crc kubenswrapper[4724]: I0226 11:13:09.994910 4724 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="01388bd7-b9a8-4714-85cd-e02eefad756b" Feb 26 11:13:10 crc kubenswrapper[4724]: I0226 11:13:10.364793 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 11:13:10 crc kubenswrapper[4724]: I0226 11:13:10.582321 4724 generic.go:334] "Generic (PLEG): container finished" podID="5f17d90f-02c5-4721-9f39-2f50cafbd329" containerID="11e1403e5e6119f071ff6aee52bb43715d37c187f93e13cb72e2b562dc780dcf" exitCode=0 Feb 26 11:13:10 crc kubenswrapper[4724]: I0226 11:13:10.582421 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" event={"ID":"5f17d90f-02c5-4721-9f39-2f50cafbd329","Type":"ContainerDied","Data":"11e1403e5e6119f071ff6aee52bb43715d37c187f93e13cb72e2b562dc780dcf"} Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.120404 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.674556 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.708767 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74b89969d5-gwmk8"] Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.708982 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" containerID="cri-o://d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735" gracePeriod=30 Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.811940 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"] Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.812539 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" podUID="f0e798af-3465-4040-a183-3319e609a282" 
containerName="route-controller-manager" containerID="cri-o://b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc" gracePeriod=30 Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.820848 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.843481 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmzk4\" (UniqueName: \"kubernetes.io/projected/5f17d90f-02c5-4721-9f39-2f50cafbd329-kube-api-access-qmzk4\") pod \"5f17d90f-02c5-4721-9f39-2f50cafbd329\" (UID: \"5f17d90f-02c5-4721-9f39-2f50cafbd329\") " Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.848636 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f17d90f-02c5-4721-9f39-2f50cafbd329-kube-api-access-qmzk4" (OuterVolumeSpecName: "kube-api-access-qmzk4") pod "5f17d90f-02c5-4721-9f39-2f50cafbd329" (UID: "5f17d90f-02c5-4721-9f39-2f50cafbd329"). InnerVolumeSpecName "kube-api-access-qmzk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:13:11 crc kubenswrapper[4724]: I0226 11:13:11.945839 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmzk4\" (UniqueName: \"kubernetes.io/projected/5f17d90f-02c5-4721-9f39-2f50cafbd329-kube-api-access-qmzk4\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.080368 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.118621 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147660 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9371739-6d1a-4872-b11e-b2e915349056-serving-cert\") pod \"c9371739-6d1a-4872-b11e-b2e915349056\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147702 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-config\") pod \"f0e798af-3465-4040-a183-3319e609a282\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147732 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-client-ca\") pod \"c9371739-6d1a-4872-b11e-b2e915349056\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147772 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e798af-3465-4040-a183-3319e609a282-serving-cert\") pod \"f0e798af-3465-4040-a183-3319e609a282\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147790 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-client-ca\") pod \"f0e798af-3465-4040-a183-3319e609a282\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147817 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-proxy-ca-bundles\") pod \"c9371739-6d1a-4872-b11e-b2e915349056\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147842 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-config\") pod \"c9371739-6d1a-4872-b11e-b2e915349056\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147891 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cx2v\" (UniqueName: \"kubernetes.io/projected/c9371739-6d1a-4872-b11e-b2e915349056-kube-api-access-2cx2v\") pod \"c9371739-6d1a-4872-b11e-b2e915349056\" (UID: \"c9371739-6d1a-4872-b11e-b2e915349056\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.147919 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdh2g\" (UniqueName: \"kubernetes.io/projected/f0e798af-3465-4040-a183-3319e609a282-kube-api-access-fdh2g\") pod \"f0e798af-3465-4040-a183-3319e609a282\" (UID: \"f0e798af-3465-4040-a183-3319e609a282\") " Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.148635 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-config" (OuterVolumeSpecName: "config") pod "f0e798af-3465-4040-a183-3319e609a282" (UID: 
"f0e798af-3465-4040-a183-3319e609a282"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.148635 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9371739-6d1a-4872-b11e-b2e915349056" (UID: "c9371739-6d1a-4872-b11e-b2e915349056"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.148930 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c9371739-6d1a-4872-b11e-b2e915349056" (UID: "c9371739-6d1a-4872-b11e-b2e915349056"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.149389 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-client-ca" (OuterVolumeSpecName: "client-ca") pod "f0e798af-3465-4040-a183-3319e609a282" (UID: "f0e798af-3465-4040-a183-3319e609a282"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.150403 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-config" (OuterVolumeSpecName: "config") pod "c9371739-6d1a-4872-b11e-b2e915349056" (UID: "c9371739-6d1a-4872-b11e-b2e915349056"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.152827 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9371739-6d1a-4872-b11e-b2e915349056-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9371739-6d1a-4872-b11e-b2e915349056" (UID: "c9371739-6d1a-4872-b11e-b2e915349056"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.153406 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e798af-3465-4040-a183-3319e609a282-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f0e798af-3465-4040-a183-3319e609a282" (UID: "f0e798af-3465-4040-a183-3319e609a282"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.157338 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e798af-3465-4040-a183-3319e609a282-kube-api-access-fdh2g" (OuterVolumeSpecName: "kube-api-access-fdh2g") pod "f0e798af-3465-4040-a183-3319e609a282" (UID: "f0e798af-3465-4040-a183-3319e609a282"). InnerVolumeSpecName "kube-api-access-fdh2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.157392 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9371739-6d1a-4872-b11e-b2e915349056-kube-api-access-2cx2v" (OuterVolumeSpecName: "kube-api-access-2cx2v") pod "c9371739-6d1a-4872-b11e-b2e915349056" (UID: "c9371739-6d1a-4872-b11e-b2e915349056"). 
InnerVolumeSpecName "kube-api-access-2cx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.245492 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249482 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cx2v\" (UniqueName: \"kubernetes.io/projected/c9371739-6d1a-4872-b11e-b2e915349056-kube-api-access-2cx2v\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249540 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdh2g\" (UniqueName: \"kubernetes.io/projected/f0e798af-3465-4040-a183-3319e609a282-kube-api-access-fdh2g\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249553 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9371739-6d1a-4872-b11e-b2e915349056-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249568 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249580 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249593 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e798af-3465-4040-a183-3319e609a282-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249602 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0e798af-3465-4040-a183-3319e609a282-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249614 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.249626 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9371739-6d1a-4872-b11e-b2e915349056-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.318702 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.610258 4724 generic.go:334] "Generic (PLEG): container finished" podID="c9371739-6d1a-4872-b11e-b2e915349056" containerID="d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735" exitCode=0 Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.610587 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.611148 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" event={"ID":"c9371739-6d1a-4872-b11e-b2e915349056","Type":"ContainerDied","Data":"d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735"} Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.611283 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b89969d5-gwmk8" event={"ID":"c9371739-6d1a-4872-b11e-b2e915349056","Type":"ContainerDied","Data":"830dc4cbc8c4c294f74154cf912d090370b9a3fa45a60377747c9d6b79b4fce7"} Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.611387 4724 scope.go:117] "RemoveContainer" containerID="d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.614029 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-6tgnh_630d11de-abc5-47ed-8284-7bbf4ec5b9c8/machine-approver-controller/0.log" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.614700 4724 generic.go:334] "Generic (PLEG): container finished" podID="630d11de-abc5-47ed-8284-7bbf4ec5b9c8" containerID="55bbd2b87ed8b7a67c6cb6e185f800581b01c3abb4da57e29a9c51d6e1b4073f" exitCode=255 Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.614798 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" event={"ID":"630d11de-abc5-47ed-8284-7bbf4ec5b9c8","Type":"ContainerDied","Data":"55bbd2b87ed8b7a67c6cb6e185f800581b01c3abb4da57e29a9c51d6e1b4073f"} Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.615234 4724 scope.go:117] "RemoveContainer" containerID="55bbd2b87ed8b7a67c6cb6e185f800581b01c3abb4da57e29a9c51d6e1b4073f" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.619013 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.620254 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535072-h4cjv" event={"ID":"5f17d90f-02c5-4721-9f39-2f50cafbd329","Type":"ContainerDied","Data":"18d74415d25a2aa1b37e1ed4111b25825d3f8a4ac4f26d6f90303b8df4fb8fe7"} Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.620293 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d74415d25a2aa1b37e1ed4111b25825d3f8a4ac4f26d6f90303b8df4fb8fe7" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.622259 4724 generic.go:334] "Generic (PLEG): container finished" podID="f0e798af-3465-4040-a183-3319e609a282" containerID="b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc" exitCode=0 Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.622287 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" event={"ID":"f0e798af-3465-4040-a183-3319e609a282","Type":"ContainerDied","Data":"b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc"} Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.622307 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" event={"ID":"f0e798af-3465-4040-a183-3319e609a282","Type":"ContainerDied","Data":"c7bd9e93a0dd5305cd60c7277bdc0c129aa5c41eb71a6877daf6e22ba0daf4e8"} Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.622333 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.637439 4724 scope.go:117] "RemoveContainer" containerID="5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.655969 4724 scope.go:117] "RemoveContainer" containerID="d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.658461 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735\": container with ID starting with d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735 not found: ID does not exist" containerID="d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.658636 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735"} err="failed to get container status \"d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735\": rpc error: code = NotFound desc = could not find container \"d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735\": container with ID starting with d08eea06d0c3e7387d52e6ee5fd80353a0c3e2acf37f7ee37a9c1539a1f18735 not found: ID does not exist" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.658754 4724 scope.go:117] "RemoveContainer" containerID="5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.659429 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e\": container with ID starting with 5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e not found: ID does not exist" containerID="5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.659464 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e"} err="failed to get container status \"5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e\": rpc error: code = NotFound desc = could not find container \"5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e\": container with ID starting with 5af7c65324ab0f200fe0e88f16f385156e91593d56676f447b255f4db869dc9e not found: ID does not exist" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.659483 4724 scope.go:117] "RemoveContainer" containerID="b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.660763 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74b89969d5-gwmk8"] Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.665407 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74b89969d5-gwmk8"] Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.697928 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"] Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.708546 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d558c998b-ftqpw"] Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.712376 4724 scope.go:117] "RemoveContainer" containerID="b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.714076 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc\": container with ID starting with b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc not found: ID does not exist" containerID="b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.714112 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc"} err="failed to get container status \"b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc\": rpc error: code = NotFound desc = could not find container \"b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc\": container with ID starting with b24f353caba69372f8a150eaf10b5f15989d9b887803a64b9eb0e589392042dc not found: ID does not exist" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.751335 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.868981 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-654fc45fb9-xldd7"] Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.869253 4724 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e798af-3465-4040-a183-3319e609a282" containerName="route-controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869274 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e798af-3465-4040-a183-3319e609a282" containerName="route-controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.869287 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869295 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.869309 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869316 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.869325 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869333 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: E0226 11:13:12.869344 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f17d90f-02c5-4721-9f39-2f50cafbd329" containerName="oc" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869350 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f17d90f-02c5-4721-9f39-2f50cafbd329" containerName="oc" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869481 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869495 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869510 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f17d90f-02c5-4721-9f39-2f50cafbd329" containerName="oc" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.869521 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e798af-3465-4040-a183-3319e609a282" containerName="route-controller-manager" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.870068 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.871734 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.871965 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.872369 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.872496 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.872648 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.877662 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.881135 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654fc45fb9-xldd7"] Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.882382 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.957953 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-config\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.957999 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c16e0c-ed26-4307-9673-5b9497d942c3-serving-cert\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.958060 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-proxy-ca-bundles\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.958078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-client-ca\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:12 crc kubenswrapper[4724]: I0226 11:13:12.958098 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6xb\" (UniqueName: 
\"kubernetes.io/projected/05c16e0c-ed26-4307-9673-5b9497d942c3-kube-api-access-fl6xb\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.059756 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-config\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.059804 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c16e0c-ed26-4307-9673-5b9497d942c3-serving-cert\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.059933 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-client-ca\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.059955 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-proxy-ca-bundles\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.059986 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl6xb\" (UniqueName: \"kubernetes.io/projected/05c16e0c-ed26-4307-9673-5b9497d942c3-kube-api-access-fl6xb\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.061901 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-client-ca\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.062173 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-config\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.062948 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-proxy-ca-bundles\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " 
pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.067847 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c16e0c-ed26-4307-9673-5b9497d942c3-serving-cert\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.076754 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl6xb\" (UniqueName: \"kubernetes.io/projected/05c16e0c-ed26-4307-9673-5b9497d942c3-kube-api-access-fl6xb\") pod \"controller-manager-654fc45fb9-xldd7\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.192364 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.294422 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58884: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.321823 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58890: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.343744 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58898: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.351936 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.365640 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654fc45fb9-xldd7"] Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.393500 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58914: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.412863 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58930: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.457557 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58934: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.635883 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-6tgnh_630d11de-abc5-47ed-8284-7bbf4ec5b9c8/machine-approver-controller/0.log" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.636439 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tgnh" event={"ID":"630d11de-abc5-47ed-8284-7bbf4ec5b9c8","Type":"ContainerStarted","Data":"0a77bb654bfe4f9445f0ff723d3cd7159f899ee7a0bdb78b6c5afb91ad3574ea"} Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.639985 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" event={"ID":"05c16e0c-ed26-4307-9673-5b9497d942c3","Type":"ContainerStarted","Data":"c0d78a02e33e603a625456508a816d3122524b60c5dc4f25f46553079e8d8a2a"} Feb 26 11:13:13 crc 
kubenswrapper[4724]: I0226 11:13:13.640024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" event={"ID":"05c16e0c-ed26-4307-9673-5b9497d942c3","Type":"ContainerStarted","Data":"0e643d15e56f7eecc13c738bf04b667b6ee85f120f211589d0cb597527dd9394"} Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.640308 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.645278 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.648153 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58944: no serving certificate available for the kubelet" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.670547 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" podStartSLOduration=2.670528211 podStartE2EDuration="2.670528211s" podCreationTimestamp="2026-02-26 11:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:13.668818767 +0000 UTC m=+460.324557892" watchObservedRunningTime="2026-02-26 11:13:13.670528211 +0000 UTC m=+460.326267336" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.867765 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd"] Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.868081 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9371739-6d1a-4872-b11e-b2e915349056" containerName="controller-manager" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.868456 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.870830 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.870855 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.870968 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.871095 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.871130 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.871972 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.875923 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd"] Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.969592 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f97483-9760-4871-9d54-94c3f3502c14-serving-cert\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.969672 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-config\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.969725 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-client-ca\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.969755 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gn8\" (UniqueName: \"kubernetes.io/projected/a6f97483-9760-4871-9d54-94c3f3502c14-kube-api-access-f7gn8\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.982094 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9371739-6d1a-4872-b11e-b2e915349056" path="/var/lib/kubelet/pods/c9371739-6d1a-4872-b11e-b2e915349056/volumes" Feb 26 11:13:13 crc kubenswrapper[4724]: 
I0226 11:13:13.982667 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e798af-3465-4040-a183-3319e609a282" path="/var/lib/kubelet/pods/f0e798af-3465-4040-a183-3319e609a282/volumes" Feb 26 11:13:13 crc kubenswrapper[4724]: I0226 11:13:13.987235 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58960: no serving certificate available for the kubelet" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.054353 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.071002 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f97483-9760-4871-9d54-94c3f3502c14-serving-cert\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.071277 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-config\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.071407 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-client-ca\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.071491 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gn8\" (UniqueName: \"kubernetes.io/projected/a6f97483-9760-4871-9d54-94c3f3502c14-kube-api-access-f7gn8\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.072449 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-client-ca\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.072773 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-config\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.077531 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f97483-9760-4871-9d54-94c3f3502c14-serving-cert\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " 
pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.085803 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gn8\" (UniqueName: \"kubernetes.io/projected/a6f97483-9760-4871-9d54-94c3f3502c14-kube-api-access-f7gn8\") pod \"route-controller-manager-f4649b4f4-mcbwd\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.189338 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.200050 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.365289 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd"] Feb 26 11:13:14 crc kubenswrapper[4724]: W0226 11:13:14.370555 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f97483_9760_4871_9d54_94c3f3502c14.slice/crio-ccaec1e1deca39b21c20ce69c826b33455bb46c0038c27414ab2e0fc97a07296 WatchSource:0}: Error finding container ccaec1e1deca39b21c20ce69c826b33455bb46c0038c27414ab2e0fc97a07296: Status 404 returned error can't find the container with id ccaec1e1deca39b21c20ce69c826b33455bb46c0038c27414ab2e0fc97a07296 Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.462612 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.648559 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" event={"ID":"a6f97483-9760-4871-9d54-94c3f3502c14","Type":"ContainerStarted","Data":"b16671bcddafcf5f275a74cd5734678a990ffa3ab871c54b5602b21710e30a66"} Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.648878 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.648889 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" event={"ID":"a6f97483-9760-4871-9d54-94c3f3502c14","Type":"ContainerStarted","Data":"ccaec1e1deca39b21c20ce69c826b33455bb46c0038c27414ab2e0fc97a07296"} Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.659609 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58974: no serving certificate available for the kubelet" Feb 26 11:13:14 crc kubenswrapper[4724]: I0226 11:13:14.669046 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" podStartSLOduration=3.669026572 podStartE2EDuration="3.669026572s" podCreationTimestamp="2026-02-26 11:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:14.66504743 +0000 UTC m=+461.320786555" watchObservedRunningTime="2026-02-26 11:13:14.669026572 +0000 
UTC m=+461.324765687" Feb 26 11:13:15 crc kubenswrapper[4724]: I0226 11:13:15.093976 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:15 crc kubenswrapper[4724]: I0226 11:13:15.191740 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 11:13:15 crc kubenswrapper[4724]: I0226 11:13:15.903809 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:13:15 crc kubenswrapper[4724]: I0226 11:13:15.964960 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58978: no serving certificate available for the kubelet" Feb 26 11:13:16 crc kubenswrapper[4724]: I0226 11:13:16.061298 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 11:13:16 crc kubenswrapper[4724]: I0226 11:13:16.323571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 11:13:16 crc kubenswrapper[4724]: I0226 11:13:16.435898 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 11:13:16 crc kubenswrapper[4724]: I0226 11:13:16.662498 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 11:13:16 crc kubenswrapper[4724]: I0226 11:13:16.906476 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:13:16 crc kubenswrapper[4724]: I0226 11:13:16.907166 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:13:17 crc kubenswrapper[4724]: I0226 11:13:17.793311 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 11:13:18 crc kubenswrapper[4724]: I0226 11:13:18.236921 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 11:13:18 crc kubenswrapper[4724]: I0226 11:13:18.546763 4724 ???:1] "http: TLS handshake error from 192.168.126.11:58982: no serving certificate available for the kubelet" Feb 26 11:13:19 crc kubenswrapper[4724]: I0226 11:13:19.067677 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 11:13:21 crc kubenswrapper[4724]: I0226 11:13:21.975866 4724 scope.go:117] "RemoveContainer" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" Feb 26 11:13:21 crc kubenswrapper[4724]: E0226 11:13:21.976437 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" Feb 26 11:13:22 crc kubenswrapper[4724]: I0226 11:13:22.143054 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 11:13:23 crc kubenswrapper[4724]: I0226 11:13:23.116162 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 11:13:23 crc kubenswrapper[4724]: I0226 11:13:23.685351 4724 ???:1] "http: TLS handshake error from 192.168.126.11:49478: no serving certificate available for the kubelet" Feb 26 11:13:32 crc kubenswrapper[4724]: I0226 11:13:32.975377 4724 scope.go:117] "RemoveContainer" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" Feb 26 11:13:32 crc kubenswrapper[4724]: E0226 11:13:32.976071 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-8kd6n_openshift-marketplace(481dac61-2ecf-46c9-b8f8-981815ceb9c5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" Feb 26 11:13:33 crc kubenswrapper[4724]: I0226 11:13:33.946770 4724 ???:1] "http: TLS handshake error from 192.168.126.11:57086: no serving certificate available for the kubelet" Feb 26 11:13:46 crc kubenswrapper[4724]: I0226 11:13:46.906357 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:13:46 crc kubenswrapper[4724]: I0226 11:13:46.906797 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:13:47 crc kubenswrapper[4724]: I0226 11:13:47.975374 4724 scope.go:117] "RemoveContainer" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" Feb 26 11:13:48 crc kubenswrapper[4724]: I0226 11:13:48.835779 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/3.log" Feb 26 11:13:48 crc kubenswrapper[4724]: I0226 11:13:48.836256 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerStarted","Data":"60bb794b9218f7d4380c1132409b524fbffcc56aa1be0e27425474e6b08c43ca"} Feb 26 11:13:48 crc kubenswrapper[4724]: I0226 11:13:48.836646 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:13:48 crc kubenswrapper[4724]: I0226 11:13:48.838791 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.470592 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-654fc45fb9-xldd7"] Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.470848 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" podUID="05c16e0c-ed26-4307-9673-5b9497d942c3" containerName="controller-manager" containerID="cri-o://c0d78a02e33e603a625456508a816d3122524b60c5dc4f25f46553079e8d8a2a" gracePeriod=30 Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.574558 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd"] Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.574862 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" podUID="a6f97483-9760-4871-9d54-94c3f3502c14" containerName="route-controller-manager" containerID="cri-o://b16671bcddafcf5f275a74cd5734678a990ffa3ab871c54b5602b21710e30a66" gracePeriod=30 Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.859357 4724 generic.go:334] "Generic (PLEG): container finished" podID="05c16e0c-ed26-4307-9673-5b9497d942c3" containerID="c0d78a02e33e603a625456508a816d3122524b60c5dc4f25f46553079e8d8a2a" exitCode=0 Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.859479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" event={"ID":"05c16e0c-ed26-4307-9673-5b9497d942c3","Type":"ContainerDied","Data":"c0d78a02e33e603a625456508a816d3122524b60c5dc4f25f46553079e8d8a2a"} Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.864540 4724 generic.go:334] "Generic (PLEG): container finished" podID="a6f97483-9760-4871-9d54-94c3f3502c14" containerID="b16671bcddafcf5f275a74cd5734678a990ffa3ab871c54b5602b21710e30a66" exitCode=0 Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.864637 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" event={"ID":"a6f97483-9760-4871-9d54-94c3f3502c14","Type":"ContainerDied","Data":"b16671bcddafcf5f275a74cd5734678a990ffa3ab871c54b5602b21710e30a66"} Feb 26 11:13:50 crc kubenswrapper[4724]: I0226 11:13:50.979492 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.159536 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c16e0c-ed26-4307-9673-5b9497d942c3-serving-cert\") pod \"05c16e0c-ed26-4307-9673-5b9497d942c3\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.159627 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-proxy-ca-bundles\") pod \"05c16e0c-ed26-4307-9673-5b9497d942c3\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.159703 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl6xb\" (UniqueName: \"kubernetes.io/projected/05c16e0c-ed26-4307-9673-5b9497d942c3-kube-api-access-fl6xb\") pod \"05c16e0c-ed26-4307-9673-5b9497d942c3\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.159723 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-config\") pod \"05c16e0c-ed26-4307-9673-5b9497d942c3\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.159743 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-client-ca\") pod \"05c16e0c-ed26-4307-9673-5b9497d942c3\" (UID: \"05c16e0c-ed26-4307-9673-5b9497d942c3\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.161406 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "05c16e0c-ed26-4307-9673-5b9497d942c3" (UID: "05c16e0c-ed26-4307-9673-5b9497d942c3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.161868 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "05c16e0c-ed26-4307-9673-5b9497d942c3" (UID: "05c16e0c-ed26-4307-9673-5b9497d942c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.162319 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-config" (OuterVolumeSpecName: "config") pod "05c16e0c-ed26-4307-9673-5b9497d942c3" (UID: "05c16e0c-ed26-4307-9673-5b9497d942c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.167461 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c16e0c-ed26-4307-9673-5b9497d942c3-kube-api-access-fl6xb" (OuterVolumeSpecName: "kube-api-access-fl6xb") pod "05c16e0c-ed26-4307-9673-5b9497d942c3" (UID: "05c16e0c-ed26-4307-9673-5b9497d942c3"). InnerVolumeSpecName "kube-api-access-fl6xb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.175306 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c16e0c-ed26-4307-9673-5b9497d942c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "05c16e0c-ed26-4307-9673-5b9497d942c3" (UID: "05c16e0c-ed26-4307-9673-5b9497d942c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.261077 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl6xb\" (UniqueName: \"kubernetes.io/projected/05c16e0c-ed26-4307-9673-5b9497d942c3-kube-api-access-fl6xb\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.261368 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.261456 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.261526 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c16e0c-ed26-4307-9673-5b9497d942c3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.261591 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05c16e0c-ed26-4307-9673-5b9497d942c3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.466514 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.667524 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f97483-9760-4871-9d54-94c3f3502c14-serving-cert\") pod \"a6f97483-9760-4871-9d54-94c3f3502c14\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.667582 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7gn8\" (UniqueName: \"kubernetes.io/projected/a6f97483-9760-4871-9d54-94c3f3502c14-kube-api-access-f7gn8\") pod \"a6f97483-9760-4871-9d54-94c3f3502c14\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.667648 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-config\") pod \"a6f97483-9760-4871-9d54-94c3f3502c14\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.667679 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-client-ca\") pod \"a6f97483-9760-4871-9d54-94c3f3502c14\" (UID: \"a6f97483-9760-4871-9d54-94c3f3502c14\") " Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.668652 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6f97483-9760-4871-9d54-94c3f3502c14" (UID: "a6f97483-9760-4871-9d54-94c3f3502c14"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.668950 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-config" (OuterVolumeSpecName: "config") pod "a6f97483-9760-4871-9d54-94c3f3502c14" (UID: "a6f97483-9760-4871-9d54-94c3f3502c14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.681368 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f97483-9760-4871-9d54-94c3f3502c14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6f97483-9760-4871-9d54-94c3f3502c14" (UID: "a6f97483-9760-4871-9d54-94c3f3502c14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.681414 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f97483-9760-4871-9d54-94c3f3502c14-kube-api-access-f7gn8" (OuterVolumeSpecName: "kube-api-access-f7gn8") pod "a6f97483-9760-4871-9d54-94c3f3502c14" (UID: "a6f97483-9760-4871-9d54-94c3f3502c14"). InnerVolumeSpecName "kube-api-access-f7gn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.724419 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-94pr8"] Feb 26 11:13:51 crc kubenswrapper[4724]: E0226 11:13:51.724622 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f97483-9760-4871-9d54-94c3f3502c14" containerName="route-controller-manager" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.724634 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f97483-9760-4871-9d54-94c3f3502c14" containerName="route-controller-manager" Feb 26 11:13:51 crc kubenswrapper[4724]: E0226 11:13:51.724652 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c16e0c-ed26-4307-9673-5b9497d942c3" containerName="controller-manager" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.724658 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c16e0c-ed26-4307-9673-5b9497d942c3" containerName="controller-manager" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.724755 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c16e0c-ed26-4307-9673-5b9497d942c3" containerName="controller-manager" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.724773 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f97483-9760-4871-9d54-94c3f3502c14" containerName="route-controller-manager" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.725101 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.736204 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-94pr8"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.768906 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.769086 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6f97483-9760-4871-9d54-94c3f3502c14-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.769103 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7gn8\" (UniqueName: \"kubernetes.io/projected/a6f97483-9760-4871-9d54-94c3f3502c14-kube-api-access-f7gn8\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.769119 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f97483-9760-4871-9d54-94c3f3502c14-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.869807 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-proxy-ca-bundles\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.869853 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hbl7c\" (UniqueName: \"kubernetes.io/projected/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-kube-api-access-hbl7c\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.869883 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-config\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.869915 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-client-ca\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.870016 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-serving-cert\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.872480 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" event={"ID":"05c16e0c-ed26-4307-9673-5b9497d942c3","Type":"ContainerDied","Data":"0e643d15e56f7eecc13c738bf04b667b6ee85f120f211589d0cb597527dd9394"} Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.872660 4724 scope.go:117] "RemoveContainer" containerID="c0d78a02e33e603a625456508a816d3122524b60c5dc4f25f46553079e8d8a2a" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.872496 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-654fc45fb9-xldd7" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.874285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" event={"ID":"a6f97483-9760-4871-9d54-94c3f3502c14","Type":"ContainerDied","Data":"ccaec1e1deca39b21c20ce69c826b33455bb46c0038c27414ab2e0fc97a07296"} Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.874311 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.894488 4724 scope.go:117] "RemoveContainer" containerID="b16671bcddafcf5f275a74cd5734678a990ffa3ab871c54b5602b21710e30a66" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.895880 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.896567 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.898790 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.900637 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.900709 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.900785 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.900829 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.900835 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.919978 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.938681 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-654fc45fb9-xldd7"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.943681 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-654fc45fb9-xldd7"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.946700 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.949698 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4649b4f4-mcbwd"] Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.970762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbl7c\" (UniqueName: \"kubernetes.io/projected/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-kube-api-access-hbl7c\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.971062 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-proxy-ca-bundles\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.971127 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-config\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 
11:13:51.971181 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-client-ca\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.971271 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-serving-cert\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.972578 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-proxy-ca-bundles\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.972745 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-client-ca\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.973020 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-config\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.975643 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-serving-cert\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.986968 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c16e0c-ed26-4307-9673-5b9497d942c3" path="/var/lib/kubelet/pods/05c16e0c-ed26-4307-9673-5b9497d942c3/volumes" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.987800 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f97483-9760-4871-9d54-94c3f3502c14" path="/var/lib/kubelet/pods/a6f97483-9760-4871-9d54-94c3f3502c14/volumes" Feb 26 11:13:51 crc kubenswrapper[4724]: I0226 11:13:51.988563 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbl7c\" (UniqueName: \"kubernetes.io/projected/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-kube-api-access-hbl7c\") pod \"controller-manager-6487bff6c8-94pr8\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.046088 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.072032 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-client-ca\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.072100 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-config\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.072130 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzbq\" (UniqueName: \"kubernetes.io/projected/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-kube-api-access-sqzbq\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.072168 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-serving-cert\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.211009 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-serving-cert\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.211317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-client-ca\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.211358 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-config\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.211386 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzbq\" (UniqueName: \"kubernetes.io/projected/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-kube-api-access-sqzbq\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") 
" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.212694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-client-ca\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.217884 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-config\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.217085 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-serving-cert\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.234579 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzbq\" (UniqueName: \"kubernetes.io/projected/d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e-kube-api-access-sqzbq\") pod \"route-controller-manager-c59776b8c-wg89z\" (UID: \"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e\") " pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.288699 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-94pr8"] Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.529531 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.725024 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z"] Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.879519 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" event={"ID":"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e","Type":"ContainerStarted","Data":"10d722965c0de3a7e9b28d1b4e78f205fb1f80dddf2f6a159de6395c80f10f68"} Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.881068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" event={"ID":"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74","Type":"ContainerStarted","Data":"fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e"} Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.881096 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" event={"ID":"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74","Type":"ContainerStarted","Data":"373a2a37f27304e429cc89ae0b2263bdad9143914b7ab4566dd7c8ab8bc5caaa"} Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.881384 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.888619 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:13:52 crc kubenswrapper[4724]: I0226 11:13:52.908365 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" podStartSLOduration=1.9083444429999998 podStartE2EDuration="1.908344443s" podCreationTimestamp="2026-02-26 11:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:52.897511196 +0000 UTC m=+499.553250331" watchObservedRunningTime="2026-02-26 11:13:52.908344443 +0000 UTC m=+499.564083548" Feb 26 11:13:53 crc kubenswrapper[4724]: I0226 11:13:53.900719 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" event={"ID":"d94e3342-4e3d-4ed1-a3bc-39cf4c2a1e7e","Type":"ContainerStarted","Data":"e8c4e9ed14d2ec3ddec91d1fab57f34d6f4e85c2333c464d4a0df189f62c0304"} Feb 26 11:13:53 crc kubenswrapper[4724]: I0226 11:13:53.924100 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" podStartSLOduration=3.924077103 podStartE2EDuration="3.924077103s" podCreationTimestamp="2026-02-26 11:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:13:53.920601965 +0000 UTC m=+500.576341090" watchObservedRunningTime="2026-02-26 11:13:53.924077103 +0000 UTC m=+500.579816218" Feb 26 11:13:54 crc kubenswrapper[4724]: I0226 11:13:54.458518 4724 ???:1] "http: TLS handshake error from 192.168.126.11:36298: no serving certificate available for the kubelet" Feb 26 11:13:54 crc kubenswrapper[4724]: I0226 11:13:54.907900 
4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:13:54 crc kubenswrapper[4724]: I0226 11:13:54.911889 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c59776b8c-wg89z" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.131615 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535074-wvpbv"] Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.132959 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.136683 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.136767 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.136797 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.145685 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-wvpbv"] Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.338560 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s7jg\" (UniqueName: \"kubernetes.io/projected/a2b79a85-e78f-427f-8250-bfe8be1e098b-kube-api-access-6s7jg\") pod \"auto-csr-approver-29535074-wvpbv\" (UID: \"a2b79a85-e78f-427f-8250-bfe8be1e098b\") " pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.440657 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s7jg\" (UniqueName: \"kubernetes.io/projected/a2b79a85-e78f-427f-8250-bfe8be1e098b-kube-api-access-6s7jg\") pod \"auto-csr-approver-29535074-wvpbv\" (UID: \"a2b79a85-e78f-427f-8250-bfe8be1e098b\") " pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.460050 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s7jg\" (UniqueName: \"kubernetes.io/projected/a2b79a85-e78f-427f-8250-bfe8be1e098b-kube-api-access-6s7jg\") pod \"auto-csr-approver-29535074-wvpbv\" (UID: \"a2b79a85-e78f-427f-8250-bfe8be1e098b\") " pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.749338 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.951931 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-wvpbv"] Feb 26 11:14:00 crc kubenswrapper[4724]: I0226 11:14:00.963535 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:14:01 crc kubenswrapper[4724]: I0226 11:14:01.946002 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" event={"ID":"a2b79a85-e78f-427f-8250-bfe8be1e098b","Type":"ContainerStarted","Data":"86df5becfcfe0f99dd656a262313373df0241acdbb5d0facff9822ee034192b4"} Feb 26 11:14:03 crc kubenswrapper[4724]: I0226 11:14:03.311337 4724 csr.go:261] certificate signing request csr-k7ffp is approved, waiting to be issued Feb 26 11:14:03 crc kubenswrapper[4724]: I0226 11:14:03.330046 4724 csr.go:257] certificate signing request csr-k7ffp is issued Feb 26 11:14:03 crc kubenswrapper[4724]: I0226 11:14:03.961907 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2b79a85-e78f-427f-8250-bfe8be1e098b" containerID="514423164b87bfec0bc2047e5625b5bff2d273e028d6dba53d3dfb9b15d72049" exitCode=0 Feb 26 11:14:03 crc kubenswrapper[4724]: I0226 11:14:03.961949 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" event={"ID":"a2b79a85-e78f-427f-8250-bfe8be1e098b","Type":"ContainerDied","Data":"514423164b87bfec0bc2047e5625b5bff2d273e028d6dba53d3dfb9b15d72049"} Feb 26 11:14:04 crc kubenswrapper[4724]: I0226 11:14:04.331591 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-11 18:20:08.101240872 +0000 UTC Feb 26 11:14:04 crc kubenswrapper[4724]: I0226 11:14:04.331629 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6919h6m3.769614526s for next certificate rotation Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.212533 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.331702 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-12 02:04:14.124821561 +0000 UTC Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.331732 4724 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7670h50m8.793091293s for next certificate rotation Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.414122 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s7jg\" (UniqueName: \"kubernetes.io/projected/a2b79a85-e78f-427f-8250-bfe8be1e098b-kube-api-access-6s7jg\") pod \"a2b79a85-e78f-427f-8250-bfe8be1e098b\" (UID: \"a2b79a85-e78f-427f-8250-bfe8be1e098b\") " Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.428846 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b79a85-e78f-427f-8250-bfe8be1e098b-kube-api-access-6s7jg" (OuterVolumeSpecName: "kube-api-access-6s7jg") pod "a2b79a85-e78f-427f-8250-bfe8be1e098b" (UID: "a2b79a85-e78f-427f-8250-bfe8be1e098b"). InnerVolumeSpecName "kube-api-access-6s7jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.515726 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s7jg\" (UniqueName: \"kubernetes.io/projected/a2b79a85-e78f-427f-8250-bfe8be1e098b-kube-api-access-6s7jg\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.974631 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.981455 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535074-wvpbv" event={"ID":"a2b79a85-e78f-427f-8250-bfe8be1e098b","Type":"ContainerDied","Data":"86df5becfcfe0f99dd656a262313373df0241acdbb5d0facff9822ee034192b4"} Feb 26 11:14:05 crc kubenswrapper[4724]: I0226 11:14:05.981496 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86df5becfcfe0f99dd656a262313373df0241acdbb5d0facff9822ee034192b4" Feb 26 11:14:06 crc kubenswrapper[4724]: I0226 11:14:06.262147 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535068-crjcm"] Feb 26 11:14:06 crc kubenswrapper[4724]: I0226 11:14:06.265710 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535068-crjcm"] Feb 26 11:14:07 crc kubenswrapper[4724]: I0226 11:14:07.981596 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b7ba35-3bf3-4738-8a71-d093b0e7fd12" path="/var/lib/kubelet/pods/91b7ba35-3bf3-4738-8a71-d093b0e7fd12/volumes" Feb 26 11:14:16 crc kubenswrapper[4724]: I0226 11:14:16.906681 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:14:16 crc kubenswrapper[4724]: I0226 11:14:16.907299 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:14:16 crc kubenswrapper[4724]: I0226 11:14:16.907348 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:14:16 crc kubenswrapper[4724]: I0226 11:14:16.907923 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"512c865cae468760a5a7701ee00c685edb3eb8ce270a9fed6d0b0e6c4c9fab74"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:14:16 crc kubenswrapper[4724]: I0226 11:14:16.907981 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://512c865cae468760a5a7701ee00c685edb3eb8ce270a9fed6d0b0e6c4c9fab74" gracePeriod=600 Feb 26 11:14:17 crc kubenswrapper[4724]: I0226 11:14:17.053305 4724 generic.go:334] "Generic 
(PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="512c865cae468760a5a7701ee00c685edb3eb8ce270a9fed6d0b0e6c4c9fab74" exitCode=0 Feb 26 11:14:17 crc kubenswrapper[4724]: I0226 11:14:17.053352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"512c865cae468760a5a7701ee00c685edb3eb8ce270a9fed6d0b0e6c4c9fab74"} Feb 26 11:14:17 crc kubenswrapper[4724]: I0226 11:14:17.053393 4724 scope.go:117] "RemoveContainer" containerID="e2b2021b894f4eb4a3b87ac95fa2435ece88171f5560f8d7cd8550186d274cd5" Feb 26 11:14:18 crc kubenswrapper[4724]: I0226 11:14:18.060558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"1edc54f7129749b0acdb90a5fcc53d2261e46a8913bfac1b99f27a0443dc7c8a"} Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.317946 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2gkcb"] Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.319264 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2gkcb" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="registry-server" containerID="cri-o://079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732" gracePeriod=2 Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.524110 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlps5"] Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.524386 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlps5" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="registry-server" containerID="cri-o://2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10" gracePeriod=2 Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.733859 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.860552 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-utilities\") pod \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.860709 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-catalog-content\") pod \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.860875 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht5jv\" (UniqueName: \"kubernetes.io/projected/35a09ba5-1063-467d-b7a6-c1b2c37a135e-kube-api-access-ht5jv\") pod \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\" (UID: \"35a09ba5-1063-467d-b7a6-c1b2c37a135e\") " Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.861468 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-utilities" (OuterVolumeSpecName: "utilities") pod "35a09ba5-1063-467d-b7a6-c1b2c37a135e" (UID: "35a09ba5-1063-467d-b7a6-c1b2c37a135e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.880418 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a09ba5-1063-467d-b7a6-c1b2c37a135e-kube-api-access-ht5jv" (OuterVolumeSpecName: "kube-api-access-ht5jv") pod "35a09ba5-1063-467d-b7a6-c1b2c37a135e" (UID: "35a09ba5-1063-467d-b7a6-c1b2c37a135e"). InnerVolumeSpecName "kube-api-access-ht5jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.924977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a09ba5-1063-467d-b7a6-c1b2c37a135e" (UID: "35a09ba5-1063-467d-b7a6-c1b2c37a135e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.962844 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht5jv\" (UniqueName: \"kubernetes.io/projected/35a09ba5-1063-467d-b7a6-c1b2c37a135e-kube-api-access-ht5jv\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.962889 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.962902 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a09ba5-1063-467d-b7a6-c1b2c37a135e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:31 crc kubenswrapper[4724]: I0226 11:14:31.987257 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.164659 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-catalog-content\") pod \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.164766 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-utilities\") pod \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.164795 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnd8c\" (UniqueName: \"kubernetes.io/projected/f9ed0863-9bdf-48ba-ad70-c1c728c58730-kube-api-access-dnd8c\") pod \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\" (UID: \"f9ed0863-9bdf-48ba-ad70-c1c728c58730\") " Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.165612 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-utilities" (OuterVolumeSpecName: "utilities") pod "f9ed0863-9bdf-48ba-ad70-c1c728c58730" (UID: "f9ed0863-9bdf-48ba-ad70-c1c728c58730"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.167717 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ed0863-9bdf-48ba-ad70-c1c728c58730-kube-api-access-dnd8c" (OuterVolumeSpecName: "kube-api-access-dnd8c") pod "f9ed0863-9bdf-48ba-ad70-c1c728c58730" (UID: "f9ed0863-9bdf-48ba-ad70-c1c728c58730"). InnerVolumeSpecName "kube-api-access-dnd8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.174791 4724 generic.go:334] "Generic (PLEG): container finished" podID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerID="079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732" exitCode=0 Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.174874 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerDied","Data":"079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732"} Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.174909 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gkcb" event={"ID":"35a09ba5-1063-467d-b7a6-c1b2c37a135e","Type":"ContainerDied","Data":"45c4660e36fb377359a9fee6c2d3bdf4813d4e5e737535224182c76870f555bb"} Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.174925 4724 scope.go:117] "RemoveContainer" containerID="079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.175056 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2gkcb" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.181678 4724 generic.go:334] "Generic (PLEG): container finished" podID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerID="2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10" exitCode=0 Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.181719 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerDied","Data":"2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10"} Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.181750 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlps5" event={"ID":"f9ed0863-9bdf-48ba-ad70-c1c728c58730","Type":"ContainerDied","Data":"a6a537e52ccc09d66935aa7c4a8baeb2bc98d2189e1ae1e2faffe5767d58618f"} Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.181807 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlps5" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.196672 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2gkcb"] Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.200952 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2gkcb"] Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.206862 4724 scope.go:117] "RemoveContainer" containerID="10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.220712 4724 scope.go:117] "RemoveContainer" containerID="bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.223626 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9ed0863-9bdf-48ba-ad70-c1c728c58730" (UID: "f9ed0863-9bdf-48ba-ad70-c1c728c58730"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.235635 4724 scope.go:117] "RemoveContainer" containerID="079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732" Feb 26 11:14:32 crc kubenswrapper[4724]: E0226 11:14:32.236076 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732\": container with ID starting with 079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732 not found: ID does not exist" containerID="079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.236126 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732"} err="failed to get container status \"079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732\": rpc error: code = NotFound desc = could not find container \"079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732\": container with ID starting with 079fd25ad4396d8206b795d4fec7a9d4bb6beffb4beab106d70f958f78325732 not found: ID does not exist" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.236153 4724 scope.go:117] "RemoveContainer" containerID="10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40" Feb 26 11:14:32 crc kubenswrapper[4724]: E0226 11:14:32.236534 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40\": container with ID starting with 10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40 not found: ID does not exist" containerID="10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.236572 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40"} err="failed to get container status \"10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40\": rpc error: code = NotFound desc = could not find container \"10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40\": container with ID starting with 10e4e9ada4e71131b5fb57a63d9abf86f8e0f271e001852d075326cf07cb4a40 not found: ID does not exist" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.236614 4724 scope.go:117] "RemoveContainer" containerID="bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d" Feb 26 11:14:32 crc kubenswrapper[4724]: E0226 11:14:32.236960 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d\": container with ID starting with bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d not found: ID does not exist" containerID="bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.236986 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d"} err="failed to get container status \"bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d\": rpc error: code = NotFound desc = could not 
find container \"bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d\": container with ID starting with bbeb21091e47cf3d7762ed124076e5b032725c1afdab7886c32c4090448c1d5d not found: ID does not exist" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.237019 4724 scope.go:117] "RemoveContainer" containerID="2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.251077 4724 scope.go:117] "RemoveContainer" containerID="6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.265385 4724 scope.go:117] "RemoveContainer" containerID="a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.266000 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.266022 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ed0863-9bdf-48ba-ad70-c1c728c58730-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.266036 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnd8c\" (UniqueName: \"kubernetes.io/projected/f9ed0863-9bdf-48ba-ad70-c1c728c58730-kube-api-access-dnd8c\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.279220 4724 scope.go:117] "RemoveContainer" containerID="2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10" Feb 26 11:14:32 crc kubenswrapper[4724]: E0226 11:14:32.279673 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10\": container with ID starting with 2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10 not found: ID does not exist" containerID="2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.279710 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10"} err="failed to get container status \"2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10\": rpc error: code = NotFound desc = could not find container \"2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10\": container with ID starting with 2d1fc0c80fcf15e43a8c5169b44284cda7d8f675992e7dcb7907b13561a59d10 not found: ID does not exist" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.279739 4724 scope.go:117] "RemoveContainer" containerID="6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382" Feb 26 11:14:32 crc kubenswrapper[4724]: E0226 11:14:32.280112 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382\": container with ID starting with 6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382 not found: ID does not exist" containerID="6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.280153 4724 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382"} err="failed to get container status \"6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382\": rpc error: code = NotFound desc = could not find container \"6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382\": container with ID starting with 6ffbc836214cfb8e477bfe122ea8fdc26dbc986c7008c6eba7e7ca4b82d59382 not found: ID does not exist" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.280205 4724 scope.go:117] "RemoveContainer" containerID="a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10" Feb 26 11:14:32 crc kubenswrapper[4724]: E0226 11:14:32.280496 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10\": container with ID starting with a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10 not found: ID does not exist" containerID="a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.280573 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10"} err="failed to get container status \"a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10\": rpc error: code = NotFound desc = could not find container \"a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10\": container with ID starting with a743aedc9c2c0cd026fe47e2d1e402781a03802a2368c796c9e4969bca547e10 not found: ID does not exist" Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.510439 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlps5"] Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.514440 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlps5"] Feb 26 11:14:32 crc kubenswrapper[4724]: I0226 11:14:32.990755 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2m27r"] Feb 26 11:14:33 crc kubenswrapper[4724]: I0226 11:14:33.722125 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj7c4"] Feb 26 11:14:33 crc kubenswrapper[4724]: I0226 11:14:33.722501 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hj7c4" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="registry-server" containerID="cri-o://bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d" gracePeriod=2 Feb 26 11:14:33 crc kubenswrapper[4724]: I0226 11:14:33.917058 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64lrq"] Feb 26 11:14:33 crc kubenswrapper[4724]: I0226 11:14:33.917312 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-64lrq" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="registry-server" containerID="cri-o://3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00" gracePeriod=2 Feb 26 11:14:33 crc kubenswrapper[4724]: I0226 11:14:33.983910 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" 
path="/var/lib/kubelet/pods/35a09ba5-1063-467d-b7a6-c1b2c37a135e/volumes" Feb 26 11:14:33 crc kubenswrapper[4724]: I0226 11:14:33.985899 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" path="/var/lib/kubelet/pods/f9ed0863-9bdf-48ba-ad70-c1c728c58730/volumes" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.556221 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.697146 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-catalog-content\") pod \"f4930fbf-4372-4466-b084-a13dfa8a5415\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.697444 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht4q4\" (UniqueName: \"kubernetes.io/projected/f4930fbf-4372-4466-b084-a13dfa8a5415-kube-api-access-ht4q4\") pod \"f4930fbf-4372-4466-b084-a13dfa8a5415\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.697491 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-utilities\") pod \"f4930fbf-4372-4466-b084-a13dfa8a5415\" (UID: \"f4930fbf-4372-4466-b084-a13dfa8a5415\") " Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.698299 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-utilities" (OuterVolumeSpecName: "utilities") pod "f4930fbf-4372-4466-b084-a13dfa8a5415" (UID: "f4930fbf-4372-4466-b084-a13dfa8a5415"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.703336 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4930fbf-4372-4466-b084-a13dfa8a5415-kube-api-access-ht4q4" (OuterVolumeSpecName: "kube-api-access-ht4q4") pod "f4930fbf-4372-4466-b084-a13dfa8a5415" (UID: "f4930fbf-4372-4466-b084-a13dfa8a5415"). InnerVolumeSpecName "kube-api-access-ht4q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.722661 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4930fbf-4372-4466-b084-a13dfa8a5415" (UID: "f4930fbf-4372-4466-b084-a13dfa8a5415"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.731939 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.798372 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k4d8\" (UniqueName: \"kubernetes.io/projected/4f727f37-5bac-476b-88a0-3d751c47e264-kube-api-access-7k4d8\") pod \"4f727f37-5bac-476b-88a0-3d751c47e264\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.798479 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-catalog-content\") pod \"4f727f37-5bac-476b-88a0-3d751c47e264\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.798531 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-utilities\") pod \"4f727f37-5bac-476b-88a0-3d751c47e264\" (UID: \"4f727f37-5bac-476b-88a0-3d751c47e264\") " Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.798751 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.798765 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht4q4\" (UniqueName: \"kubernetes.io/projected/f4930fbf-4372-4466-b084-a13dfa8a5415-kube-api-access-ht4q4\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.798778 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4930fbf-4372-4466-b084-a13dfa8a5415-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.799453 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-utilities" (OuterVolumeSpecName: "utilities") pod "4f727f37-5bac-476b-88a0-3d751c47e264" (UID: "4f727f37-5bac-476b-88a0-3d751c47e264"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.801414 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f727f37-5bac-476b-88a0-3d751c47e264-kube-api-access-7k4d8" (OuterVolumeSpecName: "kube-api-access-7k4d8") pod "4f727f37-5bac-476b-88a0-3d751c47e264" (UID: "4f727f37-5bac-476b-88a0-3d751c47e264"). InnerVolumeSpecName "kube-api-access-7k4d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.900060 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k4d8\" (UniqueName: \"kubernetes.io/projected/4f727f37-5bac-476b-88a0-3d751c47e264-kube-api-access-7k4d8\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.900317 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:34 crc kubenswrapper[4724]: I0226 11:14:34.926969 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f727f37-5bac-476b-88a0-3d751c47e264" (UID: "4f727f37-5bac-476b-88a0-3d751c47e264"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.001466 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f727f37-5bac-476b-88a0-3d751c47e264-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.201910 4724 generic.go:334] "Generic (PLEG): container finished" podID="4f727f37-5bac-476b-88a0-3d751c47e264" containerID="3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00" exitCode=0 Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.202024 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64lrq" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.202009 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerDied","Data":"3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00"} Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.202237 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64lrq" event={"ID":"4f727f37-5bac-476b-88a0-3d751c47e264","Type":"ContainerDied","Data":"40b5bc5c59227c129f863c296ab3e604d2a35c05fe0380cbbd325edc4f7c77c2"} Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.202263 4724 scope.go:117] "RemoveContainer" containerID="3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.204442 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerID="bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d" exitCode=0 Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.204472 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerDied","Data":"bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d"} Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.204488 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj7c4" event={"ID":"f4930fbf-4372-4466-b084-a13dfa8a5415","Type":"ContainerDied","Data":"8aed15bb9b39edae84defea99105065b1b766858de45578a4b28437949baf680"} Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.204514 4724 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj7c4" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.219861 4724 scope.go:117] "RemoveContainer" containerID="14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.235462 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj7c4"] Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.240296 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj7c4"] Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.249873 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64lrq"] Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.255829 4724 scope.go:117] "RemoveContainer" containerID="e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.263627 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-64lrq"] Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.276147 4724 scope.go:117] "RemoveContainer" containerID="3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00" Feb 26 11:14:35 crc kubenswrapper[4724]: E0226 11:14:35.281558 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00\": container with ID starting with 3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00 not found: ID does not exist" containerID="3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.281608 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00"} err="failed to get container status \"3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00\": rpc error: code = NotFound desc = could not find container \"3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00\": container with ID starting with 3135c43a357fc1c9365cf1b386458200bd97bcd37cb961a1a80746c9685fdd00 not found: ID does not exist" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.281634 4724 scope.go:117] "RemoveContainer" containerID="14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b" Feb 26 11:14:35 crc kubenswrapper[4724]: E0226 11:14:35.282246 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b\": container with ID starting with 14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b not found: ID does not exist" containerID="14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b" Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.282278 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b"} err="failed to get container status \"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b\": rpc error: code = NotFound desc = could not find container \"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b\": container with ID starting with 
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.282278 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b"} err="failed to get container status \"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b\": rpc error: code = NotFound desc = could not find container \"14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b\": container with ID starting with 14cd4ae4993277a3eb5cca5ead8fad774ad1a080f1a23b66453e3662019b0d6b not found: ID does not exist"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.282297 4724 scope.go:117] "RemoveContainer" containerID="e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6"
Feb 26 11:14:35 crc kubenswrapper[4724]: E0226 11:14:35.282659 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6\": container with ID starting with e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6 not found: ID does not exist" containerID="e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.282691 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6"} err="failed to get container status \"e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6\": rpc error: code = NotFound desc = could not find container \"e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6\": container with ID starting with e25b12fb5ff4351b357acff0751cf8300ce03dab8a011152aa5bee7eb0bac4b6 not found: ID does not exist"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.282708 4724 scope.go:117] "RemoveContainer" containerID="bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.300990 4724 scope.go:117] "RemoveContainer" containerID="bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.319991 4724 scope.go:117] "RemoveContainer" containerID="edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.340380 4724 scope.go:117] "RemoveContainer" containerID="bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d"
Feb 26 11:14:35 crc kubenswrapper[4724]: E0226 11:14:35.340993 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d\": container with ID starting with bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d not found: ID does not exist" containerID="bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.341030 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d"} err="failed to get container status \"bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d\": rpc error: code = NotFound desc = could not find container \"bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d\": container with ID starting with bb1fdb4149f54b4f3af9e9c18ddce04d2a874a6a9c95b8e4640df2793389849d not found: ID does not exist"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.341059 4724 scope.go:117] "RemoveContainer" containerID="bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8"
Feb 26 11:14:35 crc kubenswrapper[4724]: E0226 11:14:35.341516 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8\": container with ID starting with bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8 not found: ID does not exist" containerID="bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.341545 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8"} err="failed to get container status \"bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8\": rpc error: code = NotFound desc = could not find container \"bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8\": container with ID starting with bad0c8048f0f86adbd3544de02fa1c490ba522a114e060f53a248dff2131b3e8 not found: ID does not exist"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.341564 4724 scope.go:117] "RemoveContainer" containerID="edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462"
Feb 26 11:14:35 crc kubenswrapper[4724]: E0226 11:14:35.341828 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462\": container with ID starting with edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462 not found: ID does not exist" containerID="edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.341849 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462"} err="failed to get container status \"edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462\": rpc error: code = NotFound desc = could not find container \"edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462\": container with ID starting with edc674b6f88033d96851c9533ea95019e2660940d662ade38868a4849382b462 not found: ID does not exist"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.982500 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" path="/var/lib/kubelet/pods/4f727f37-5bac-476b-88a0-3d751c47e264/volumes"
Feb 26 11:14:35 crc kubenswrapper[4724]: I0226 11:14:35.983121 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" path="/var/lib/kubelet/pods/f4930fbf-4372-4466-b084-a13dfa8a5415/volumes"
Feb 26 11:14:51 crc kubenswrapper[4724]: I0226 11:14:51.706029 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-94pr8"]
Feb 26 11:14:51 crc kubenswrapper[4724]: I0226 11:14:51.706906 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" podUID="87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" containerName="controller-manager" containerID="cri-o://fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e" gracePeriod=30
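
[Annotation] gracePeriod=30 in the "Killing container with a grace period" entry is the pod's terminationGracePeriodSeconds: the kubelet asks CRI-O to stop the container and escalates to a forced kill only if it is still running when the grace period expires. For reference, the equivalent graceful delete issued from the API side might look like the sketch below, using the official kubernetes Python client (pip install kubernetes); the pod name and namespace are taken from the log above, and a reachable kubeconfig is assumed:

from kubernetes import client, config

config.load_kube_config()  # assumes credentials for the crc cluster are available
v1 = client.CoreV1Api()

# Graceful delete; the kubelet then stops the container and escalates
# to SIGKILL if it has not exited after 30 seconds.
v1.delete_namespaced_pod(
    name="controller-manager-6487bff6c8-94pr8",
    namespace="openshift-controller-manager",
    grace_period_seconds=30,
)
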
Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.147357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-client-ca\") pod \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.147451 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-config\") pod \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.147480 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbl7c\" (UniqueName: \"kubernetes.io/projected/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-kube-api-access-hbl7c\") pod \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.147525 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-proxy-ca-bundles\") pod \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.147541 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-serving-cert\") pod \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\" (UID: \"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74\") " Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.148454 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" (UID: "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.148719 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-client-ca" (OuterVolumeSpecName: "client-ca") pod "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" (UID: "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.148784 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-config" (OuterVolumeSpecName: "config") pod "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" (UID: "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.152834 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" (UID: "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.152844 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-kube-api-access-hbl7c" (OuterVolumeSpecName: "kube-api-access-hbl7c") pod "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" (UID: "87f77ddf-cd14-4a43-bc4d-f7ce472a7b74"). InnerVolumeSpecName "kube-api-access-hbl7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.248997 4724 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.249065 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.249081 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbl7c\" (UniqueName: \"kubernetes.io/projected/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-kube-api-access-hbl7c\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.249097 4724 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.249110 4724 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.293352 4724 generic.go:334] "Generic (PLEG): container finished" podID="87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" containerID="fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e" exitCode=0 Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.293402 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" event={"ID":"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74","Type":"ContainerDied","Data":"fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e"} Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.293431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" event={"ID":"87f77ddf-cd14-4a43-bc4d-f7ce472a7b74","Type":"ContainerDied","Data":"373a2a37f27304e429cc89ae0b2263bdad9143914b7ab4566dd7c8ab8bc5caaa"} Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.293449 4724 scope.go:117] "RemoveContainer" containerID="fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.293559 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-94pr8" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.325555 4724 scope.go:117] "RemoveContainer" containerID="fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.328016 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e\": container with ID starting with fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e not found: ID does not exist" containerID="fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.328090 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e"} err="failed to get container status \"fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e\": rpc error: code = NotFound desc = could not find container \"fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e\": container with ID starting with fdef82ccbf149a0a5ecb220e076f96c4fa57e8fe5617a4af9f95cac14338c68e not found: ID does not exist" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.331688 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-94pr8"] Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.339392 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-94pr8"] Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.925795 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-558c45cc54-ckmnp"] Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926032 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926049 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926064 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926072 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926401 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926411 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926428 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926436 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 
11:14:52.926447 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926454 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926465 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b79a85-e78f-427f-8250-bfe8be1e098b" containerName="oc" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926472 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b79a85-e78f-427f-8250-bfe8be1e098b" containerName="oc" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926481 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926489 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926497 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926504 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="extract-utilities" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926513 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926520 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926530 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926538 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926549 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926558 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926570 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" containerName="controller-manager" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926577 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" containerName="controller-manager" Feb 26 11:14:52 crc kubenswrapper[4724]: E0226 11:14:52.926590 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926597 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: 
E0226 11:14:52.926608 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926616 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="extract-content" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926737 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a09ba5-1063-467d-b7a6-c1b2c37a135e" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926747 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f727f37-5bac-476b-88a0-3d751c47e264" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926761 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ed0863-9bdf-48ba-ad70-c1c728c58730" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926771 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" containerName="controller-manager" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926782 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b79a85-e78f-427f-8250-bfe8be1e098b" containerName="oc" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.926793 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4930fbf-4372-4466-b084-a13dfa8a5415" containerName="registry-server" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.927264 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.931469 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.932695 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.933449 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.934034 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.935165 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.937839 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.943619 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558c45cc54-ckmnp"] Feb 26 11:14:52 crc kubenswrapper[4724]: I0226 11:14:52.946467 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.058805 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-proxy-ca-bundles\") pod 
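
[Annotation] The E/I pairs from cpu_manager.go:410 and state_mem.go:107 above, and the memory_manager.go:354 lines that follow them, are housekeeping: when a new pod is admitted, the resource managers drop per-container CPU and memory state left behind by pods that no longer exist. They are logged at error severity but are expected during pod churn. To separate this kind of noise from genuine failures, tallying error-level records by source location is a useful first pass; a sketch under the same stdin convention as before (the klog prefix is severity, MMDD, time, pid, file.go:line]):

import re
import sys
from collections import Counter

# Matches the klog prefix, e.g. "E0226 11:14:52.926447 4724 cpu_manager.go:410]".
KLOG = re.compile(r"\b([EIW])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+([\w./-]+\.go:\d+)\]")

errors = Counter()
for line in sys.stdin:
    for severity, source in KLOG.findall(line):
        if severity == "E":
            errors[source] += 1

# Sources with many E-records during churn (cpu_manager.go:410, log.go:32)
# are usually benign here; a new source showing up warrants a closer look.
for source, count in errors.most_common():
    print(f"{count:5d}  {source}")
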
\"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.059043 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444wg\" (UniqueName: \"kubernetes.io/projected/c414055d-8bfc-4e9a-858a-074d38a3097a-kube-api-access-444wg\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.059139 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-config\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.059211 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c414055d-8bfc-4e9a-858a-074d38a3097a-serving-cert\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.059464 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-client-ca\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.160674 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c414055d-8bfc-4e9a-858a-074d38a3097a-serving-cert\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.160757 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-client-ca\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.160900 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-proxy-ca-bundles\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.160956 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444wg\" (UniqueName: \"kubernetes.io/projected/c414055d-8bfc-4e9a-858a-074d38a3097a-kube-api-access-444wg\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " 
pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.160983 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-config\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.162203 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-client-ca\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.162308 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-proxy-ca-bundles\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.162790 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c414055d-8bfc-4e9a-858a-074d38a3097a-config\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.168464 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c414055d-8bfc-4e9a-858a-074d38a3097a-serving-cert\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.186440 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444wg\" (UniqueName: \"kubernetes.io/projected/c414055d-8bfc-4e9a-858a-074d38a3097a-kube-api-access-444wg\") pod \"controller-manager-558c45cc54-ckmnp\" (UID: \"c414055d-8bfc-4e9a-858a-074d38a3097a\") " pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.255466 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.693522 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-558c45cc54-ckmnp"] Feb 26 11:14:53 crc kubenswrapper[4724]: W0226 11:14:53.699003 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc414055d_8bfc_4e9a_858a_074d38a3097a.slice/crio-9892db4ab61387230777fb34f66c1fb9c44381d57c118f3d1f6488fe134a1eac WatchSource:0}: Error finding container 9892db4ab61387230777fb34f66c1fb9c44381d57c118f3d1f6488fe134a1eac: Status 404 returned error can't find the container with id 9892db4ab61387230777fb34f66c1fb9c44381d57c118f3d1f6488fe134a1eac Feb 26 11:14:53 crc kubenswrapper[4724]: I0226 11:14:53.983664 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f77ddf-cd14-4a43-bc4d-f7ce472a7b74" path="/var/lib/kubelet/pods/87f77ddf-cd14-4a43-bc4d-f7ce472a7b74/volumes" Feb 26 11:14:54 crc kubenswrapper[4724]: I0226 11:14:54.308785 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" event={"ID":"c414055d-8bfc-4e9a-858a-074d38a3097a","Type":"ContainerStarted","Data":"c2e78206ffdefa0f4cbdd957f2fbec7f409c2d37dfc45eb42f03bc6c8e75692a"} Feb 26 11:14:54 crc kubenswrapper[4724]: I0226 11:14:54.308834 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" event={"ID":"c414055d-8bfc-4e9a-858a-074d38a3097a","Type":"ContainerStarted","Data":"9892db4ab61387230777fb34f66c1fb9c44381d57c118f3d1f6488fe134a1eac"} Feb 26 11:14:54 crc kubenswrapper[4724]: I0226 11:14:54.309241 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:54 crc kubenswrapper[4724]: I0226 11:14:54.314720 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" Feb 26 11:14:54 crc kubenswrapper[4724]: I0226 11:14:54.331526 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-558c45cc54-ckmnp" podStartSLOduration=3.331507216 podStartE2EDuration="3.331507216s" podCreationTimestamp="2026-02-26 11:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:14:54.326313533 +0000 UTC m=+560.982052658" watchObservedRunningTime="2026-02-26 11:14:54.331507216 +0000 UTC m=+560.987246341" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.019830 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" containerID="cri-o://dc312fc8866b864610732aebc21f12d909e36023b79c4cb44fb78526ae16a484" gracePeriod=15 Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.332889 4724 generic.go:334] "Generic (PLEG): container finished" podID="5f469f47-990d-4224-8002-c658ef626f48" containerID="dc312fc8866b864610732aebc21f12d909e36023b79c4cb44fb78526ae16a484" exitCode=0 Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.332953 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" event={"ID":"5f469f47-990d-4224-8002-c658ef626f48","Type":"ContainerDied","Data":"dc312fc8866b864610732aebc21f12d909e36023b79c4cb44fb78526ae16a484"} Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.370921 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.533758 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff98r\" (UniqueName: \"kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.533835 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-idp-0-file-data\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.533871 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-login\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.533925 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-ocp-branding-template\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535140 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-cliconfig\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535322 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-serving-cert\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535362 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-router-certs\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535394 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-provider-selection\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535424 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f469f47-990d-4224-8002-c658ef626f48-audit-dir\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535459 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-session\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535480 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-error\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535518 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-trusted-ca-bundle\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.535597 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-service-ca\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 
11:14:58.535618 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-audit-policies\") pod \"5f469f47-990d-4224-8002-c658ef626f48\" (UID: \"5f469f47-990d-4224-8002-c658ef626f48\") " Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.536099 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.536478 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.539572 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f469f47-990d-4224-8002-c658ef626f48-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.540059 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.540069 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.541030 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.541205 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r" (OuterVolumeSpecName: "kube-api-access-ff98r") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "kube-api-access-ff98r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.548354 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.548927 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.549238 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.550429 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.550621 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.550837 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.551105 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5f469f47-990d-4224-8002-c658ef626f48" (UID: "5f469f47-990d-4224-8002-c658ef626f48"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637031 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637080 4724 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637095 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff98r\" (UniqueName: \"kubernetes.io/projected/5f469f47-990d-4224-8002-c658ef626f48-kube-api-access-ff98r\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637134 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637144 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637155 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637164 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637172 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637216 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637225 4724 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f469f47-990d-4224-8002-c658ef626f48-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637234 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637242 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-session\") 
on node \"crc\" DevicePath \"\"" Feb 26 11:14:58 crc kubenswrapper[4724]: I0226 11:14:58.637252 4724 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f469f47-990d-4224-8002-c658ef626f48-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:14:59 crc kubenswrapper[4724]: I0226 11:14:59.343068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" event={"ID":"5f469f47-990d-4224-8002-c658ef626f48","Type":"ContainerDied","Data":"4a2800e72745c3492bc0bb7d7932c09bb3f178b95199903d706e3b44b78023f1"} Feb 26 11:14:59 crc kubenswrapper[4724]: I0226 11:14:59.343233 4724 scope.go:117] "RemoveContainer" containerID="dc312fc8866b864610732aebc21f12d909e36023b79c4cb44fb78526ae16a484" Feb 26 11:14:59 crc kubenswrapper[4724]: I0226 11:14:59.343235 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2m27r" Feb 26 11:14:59 crc kubenswrapper[4724]: I0226 11:14:59.415451 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2m27r"] Feb 26 11:14:59 crc kubenswrapper[4724]: I0226 11:14:59.421127 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2m27r"] Feb 26 11:14:59 crc kubenswrapper[4724]: I0226 11:14:59.982789 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f469f47-990d-4224-8002-c658ef626f48" path="/var/lib/kubelet/pods/5f469f47-990d-4224-8002-c658ef626f48/volumes" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.143299 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7"] Feb 26 11:15:00 crc kubenswrapper[4724]: E0226 11:15:00.143638 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.143663 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.143780 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f469f47-990d-4224-8002-c658ef626f48" containerName="oauth-openshift" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.144277 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.147195 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.148519 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.154527 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7"] Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.260140 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1205f6-96f8-47e5-bd64-8bfae8525d43-config-volume\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.260246 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67lc7\" (UniqueName: \"kubernetes.io/projected/ac1205f6-96f8-47e5-bd64-8bfae8525d43-kube-api-access-67lc7\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.260291 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1205f6-96f8-47e5-bd64-8bfae8525d43-secret-volume\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.361765 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67lc7\" (UniqueName: \"kubernetes.io/projected/ac1205f6-96f8-47e5-bd64-8bfae8525d43-kube-api-access-67lc7\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.361817 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1205f6-96f8-47e5-bd64-8bfae8525d43-secret-volume\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.361869 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1205f6-96f8-47e5-bd64-8bfae8525d43-config-volume\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.362668 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1205f6-96f8-47e5-bd64-8bfae8525d43-config-volume\") pod 
\"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.372886 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1205f6-96f8-47e5-bd64-8bfae8525d43-secret-volume\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.380942 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67lc7\" (UniqueName: \"kubernetes.io/projected/ac1205f6-96f8-47e5-bd64-8bfae8525d43-kube-api-access-67lc7\") pod \"collect-profiles-29535075-rv8b7\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.458668 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.652934 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7"] Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.932154 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5668d7d5f9-655l9"] Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.932791 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.935430 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.935468 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.935526 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.935875 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.937110 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.937384 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.937532 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.937641 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.939665 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.939772 4724 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.939864 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.940480 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.947774 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.955301 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.961828 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 11:15:00 crc kubenswrapper[4724]: I0226 11:15:00.963084 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5668d7d5f9-655l9"] Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072405 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072478 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-audit-policies\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072507 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-error\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072564 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l546p\" (UniqueName: \"kubernetes.io/projected/7c18372b-fb6b-4c95-b1d7-b38b8165668a-kube-api-access-l546p\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072591 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-login\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 
11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072628 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072651 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072700 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072724 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072759 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072779 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-session\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072801 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.072873 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c18372b-fb6b-4c95-b1d7-b38b8165668a-audit-dir\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174245 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-session\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174281 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174309 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c18372b-fb6b-4c95-b1d7-b38b8165668a-audit-dir\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174335 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174370 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-audit-policies\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174392 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-error\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174424 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c18372b-fb6b-4c95-b1d7-b38b8165668a-audit-dir\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: 
\"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l546p\" (UniqueName: \"kubernetes.io/projected/7c18372b-fb6b-4c95-b1d7-b38b8165668a-kube-api-access-l546p\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174487 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-login\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174552 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174578 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.174645 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.176659 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-audit-policies\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.177021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.180925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.180986 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.182051 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.182068 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-login\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.182577 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.183663 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " 
pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.183829 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-error\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.184784 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.186439 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-system-session\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.187287 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c18372b-fb6b-4c95-b1d7-b38b8165668a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.191434 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l546p\" (UniqueName: \"kubernetes.io/projected/7c18372b-fb6b-4c95-b1d7-b38b8165668a-kube-api-access-l546p\") pod \"oauth-openshift-5668d7d5f9-655l9\" (UID: \"7c18372b-fb6b-4c95-b1d7-b38b8165668a\") " pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.249586 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.354373 4724 generic.go:334] "Generic (PLEG): container finished" podID="ac1205f6-96f8-47e5-bd64-8bfae8525d43" containerID="052437116e29d5f41d32260321c962da74f58b2f08dc44c85ebff105046c618f" exitCode=0 Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.354413 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" event={"ID":"ac1205f6-96f8-47e5-bd64-8bfae8525d43","Type":"ContainerDied","Data":"052437116e29d5f41d32260321c962da74f58b2f08dc44c85ebff105046c618f"} Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.354437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" event={"ID":"ac1205f6-96f8-47e5-bd64-8bfae8525d43","Type":"ContainerStarted","Data":"9db4b8340dce2a1820e2b15434aa2f65955b1a679c7f55c8472b06f58d53969f"} Feb 26 11:15:01 crc kubenswrapper[4724]: I0226 11:15:01.453866 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5668d7d5f9-655l9"] Feb 26 11:15:01 crc kubenswrapper[4724]: W0226 11:15:01.458087 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c18372b_fb6b_4c95_b1d7_b38b8165668a.slice/crio-03c5442509fac63ea9582ae1e1a70c0b9dea6250e89c96fa1af04431a31e4a5e WatchSource:0}: Error finding container 03c5442509fac63ea9582ae1e1a70c0b9dea6250e89c96fa1af04431a31e4a5e: Status 404 returned error can't find the container with id 03c5442509fac63ea9582ae1e1a70c0b9dea6250e89c96fa1af04431a31e4a5e Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.369347 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" event={"ID":"7c18372b-fb6b-4c95-b1d7-b38b8165668a","Type":"ContainerStarted","Data":"8b900a9a17a92bd4beb82e9c95e80835ef5d6caab9db532f3c42e36fa90d276e"} Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.369724 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.369747 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" event={"ID":"7c18372b-fb6b-4c95-b1d7-b38b8165668a","Type":"ContainerStarted","Data":"03c5442509fac63ea9582ae1e1a70c0b9dea6250e89c96fa1af04431a31e4a5e"} Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.375078 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.397450 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5668d7d5f9-655l9" podStartSLOduration=29.397429988 podStartE2EDuration="29.397429988s" podCreationTimestamp="2026-02-26 11:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:02.395651763 +0000 UTC m=+569.051390898" watchObservedRunningTime="2026-02-26 11:15:02.397429988 +0000 UTC m=+569.053169103" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.636902 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.792905 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67lc7\" (UniqueName: \"kubernetes.io/projected/ac1205f6-96f8-47e5-bd64-8bfae8525d43-kube-api-access-67lc7\") pod \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.792958 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1205f6-96f8-47e5-bd64-8bfae8525d43-secret-volume\") pod \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.793005 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1205f6-96f8-47e5-bd64-8bfae8525d43-config-volume\") pod \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\" (UID: \"ac1205f6-96f8-47e5-bd64-8bfae8525d43\") " Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.793804 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1205f6-96f8-47e5-bd64-8bfae8525d43-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac1205f6-96f8-47e5-bd64-8bfae8525d43" (UID: "ac1205f6-96f8-47e5-bd64-8bfae8525d43"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.810445 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1205f6-96f8-47e5-bd64-8bfae8525d43-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac1205f6-96f8-47e5-bd64-8bfae8525d43" (UID: "ac1205f6-96f8-47e5-bd64-8bfae8525d43"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.811609 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1205f6-96f8-47e5-bd64-8bfae8525d43-kube-api-access-67lc7" (OuterVolumeSpecName: "kube-api-access-67lc7") pod "ac1205f6-96f8-47e5-bd64-8bfae8525d43" (UID: "ac1205f6-96f8-47e5-bd64-8bfae8525d43"). InnerVolumeSpecName "kube-api-access-67lc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.894853 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67lc7\" (UniqueName: \"kubernetes.io/projected/ac1205f6-96f8-47e5-bd64-8bfae8525d43-kube-api-access-67lc7\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.895092 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1205f6-96f8-47e5-bd64-8bfae8525d43-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:02 crc kubenswrapper[4724]: I0226 11:15:02.895192 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1205f6-96f8-47e5-bd64-8bfae8525d43-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:03 crc kubenswrapper[4724]: I0226 11:15:03.375892 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" Feb 26 11:15:03 crc kubenswrapper[4724]: I0226 11:15:03.376327 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7" event={"ID":"ac1205f6-96f8-47e5-bd64-8bfae8525d43","Type":"ContainerDied","Data":"9db4b8340dce2a1820e2b15434aa2f65955b1a679c7f55c8472b06f58d53969f"} Feb 26 11:15:03 crc kubenswrapper[4724]: I0226 11:15:03.376381 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db4b8340dce2a1820e2b15434aa2f65955b1a679c7f55c8472b06f58d53969f" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.737521 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92dsj"] Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.738398 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-92dsj" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="registry-server" containerID="cri-o://c805f42b4b6b1239e46c0e5d1cf780973199cb7010c00ebdeaa519107094af98" gracePeriod=30 Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.754032 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9shd"] Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.754804 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9shd" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="registry-server" containerID="cri-o://2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db" gracePeriod=30 Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.766691 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kd6n"] Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.766933 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" containerID="cri-o://60bb794b9218f7d4380c1132409b524fbffcc56aa1be0e27425474e6b08c43ca" gracePeriod=30 Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.788328 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb5gc"] Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.788850 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xb5gc" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="registry-server" containerID="cri-o://e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" gracePeriod=30 Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.806272 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rtjt6"] Feb 26 11:15:40 crc kubenswrapper[4724]: E0226 11:15:40.806572 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1205f6-96f8-47e5-bd64-8bfae8525d43" containerName="collect-profiles" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.806586 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1205f6-96f8-47e5-bd64-8bfae8525d43" containerName="collect-profiles" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.806706 4724 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ac1205f6-96f8-47e5-bd64-8bfae8525d43" containerName="collect-profiles" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.808473 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.816708 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqtct"] Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.817276 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mqtct" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="registry-server" containerID="cri-o://68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611" gracePeriod=30 Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.823423 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rtjt6"] Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.941306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bnj\" (UniqueName: \"kubernetes.io/projected/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-kube-api-access-49bnj\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.941392 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.941421 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:40 crc kubenswrapper[4724]: E0226 11:15:40.948698 4724 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 26 11:15:40 crc kubenswrapper[4724]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 26 11:15:40 crc kubenswrapper[4724]: fail startup Feb 26 11:15:40 crc kubenswrapper[4724]: , stdout: , stderr: , exit code -1 Feb 26 11:15:40 crc kubenswrapper[4724]: > containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 11:15:40 crc kubenswrapper[4724]: E0226 11:15:40.953399 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8 is running failed: container process not found" containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 11:15:40 crc kubenswrapper[4724]: E0226 11:15:40.954116 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8 is running failed: container process not found" containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 11:15:40 crc kubenswrapper[4724]: E0226 11:15:40.954188 4724 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xb5gc" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="registry-server" Feb 26 11:15:40 crc kubenswrapper[4724]: I0226 11:15:40.991736 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-xb5gc" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="registry-server" probeResult="failure" output="" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.042387 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bnj\" (UniqueName: \"kubernetes.io/projected/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-kube-api-access-49bnj\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.042469 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.042497 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.044450 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.066505 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.074759 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bnj\" (UniqueName: \"kubernetes.io/projected/2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3-kube-api-access-49bnj\") pod \"marketplace-operator-79b997595-rtjt6\" (UID: \"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.265008 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.272230 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.338107 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.397823 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.447242 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-utilities\") pod \"0eb55921-4244-4557-aa72-97cea802c3fb\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.447357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sht7\" (UniqueName: \"kubernetes.io/projected/0eb55921-4244-4557-aa72-97cea802c3fb-kube-api-access-5sht7\") pod \"0eb55921-4244-4557-aa72-97cea802c3fb\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.448439 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-utilities" (OuterVolumeSpecName: "utilities") pod "0eb55921-4244-4557-aa72-97cea802c3fb" (UID: "0eb55921-4244-4557-aa72-97cea802c3fb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.449164 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvcv\" (UniqueName: \"kubernetes.io/projected/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-kube-api-access-6vvcv\") pod \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.449234 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content\") pod \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.449266 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-utilities\") pod \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.449293 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-catalog-content\") pod \"0eb55921-4244-4557-aa72-97cea802c3fb\" (UID: \"0eb55921-4244-4557-aa72-97cea802c3fb\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.449851 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.454874 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-utilities" (OuterVolumeSpecName: "utilities") pod "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" (UID: "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.455939 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-kube-api-access-6vvcv" (OuterVolumeSpecName: "kube-api-access-6vvcv") pod "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" (UID: "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f"). InnerVolumeSpecName "kube-api-access-6vvcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.456606 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb55921-4244-4557-aa72-97cea802c3fb-kube-api-access-5sht7" (OuterVolumeSpecName: "kube-api-access-5sht7") pod "0eb55921-4244-4557-aa72-97cea802c3fb" (UID: "0eb55921-4244-4557-aa72-97cea802c3fb"). InnerVolumeSpecName "kube-api-access-5sht7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.552822 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-utilities\") pod \"056030ad-19ca-4542-a486-139eb62524b0\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.552939 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-catalog-content\") pod \"056030ad-19ca-4542-a486-139eb62524b0\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.553018 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmbgj\" (UniqueName: \"kubernetes.io/projected/056030ad-19ca-4542-a486-139eb62524b0-kube-api-access-vmbgj\") pod \"056030ad-19ca-4542-a486-139eb62524b0\" (UID: \"056030ad-19ca-4542-a486-139eb62524b0\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.553383 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvcv\" (UniqueName: \"kubernetes.io/projected/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-kube-api-access-6vvcv\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.553400 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.553411 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sht7\" (UniqueName: \"kubernetes.io/projected/0eb55921-4244-4557-aa72-97cea802c3fb-kube-api-access-5sht7\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.556784 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-utilities" (OuterVolumeSpecName: "utilities") pod "056030ad-19ca-4542-a486-139eb62524b0" (UID: "056030ad-19ca-4542-a486-139eb62524b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.565729 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rtjt6"] Feb 26 11:15:41 crc kubenswrapper[4724]: W0226 11:15:41.566589 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2edf1cee_54e6_4ffa_93ea_d09a2a74d8a3.slice/crio-c7eb4e3bdcff9d196f97bbcc2ff5015a0d40836b2f13191add5b6d447e89fdc8 WatchSource:0}: Error finding container c7eb4e3bdcff9d196f97bbcc2ff5015a0d40836b2f13191add5b6d447e89fdc8: Status 404 returned error can't find the container with id c7eb4e3bdcff9d196f97bbcc2ff5015a0d40836b2f13191add5b6d447e89fdc8 Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.572796 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eb55921-4244-4557-aa72-97cea802c3fb" (UID: "0eb55921-4244-4557-aa72-97cea802c3fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.596719 4724 generic.go:334] "Generic (PLEG): container finished" podID="056030ad-19ca-4542-a486-139eb62524b0" containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" exitCode=0 Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.596810 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerDied","Data":"e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.596847 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xb5gc" event={"ID":"056030ad-19ca-4542-a486-139eb62524b0","Type":"ContainerDied","Data":"fe610450de497b52e584cf20ae2c72ee02d515ac0ee2a8b7e10cb1bd435d3960"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.596870 4724 scope.go:117] "RemoveContainer" containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.597054 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xb5gc" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.610812 4724 generic.go:334] "Generic (PLEG): container finished" podID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerID="68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611" exitCode=0 Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.610925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerDied","Data":"68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.610955 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtct" event={"ID":"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f","Type":"ContainerDied","Data":"782efe2a1aceeb3d8ea72d619d55d5d6ff56d16d25dda3d5a2823c041cf21d2e"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.611050 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtct" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.622613 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056030ad-19ca-4542-a486-139eb62524b0-kube-api-access-vmbgj" (OuterVolumeSpecName: "kube-api-access-vmbgj") pod "056030ad-19ca-4542-a486-139eb62524b0" (UID: "056030ad-19ca-4542-a486-139eb62524b0"). InnerVolumeSpecName "kube-api-access-vmbgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.631608 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "056030ad-19ca-4542-a486-139eb62524b0" (UID: "056030ad-19ca-4542-a486-139eb62524b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.633019 4724 generic.go:334] "Generic (PLEG): container finished" podID="0eb55921-4244-4557-aa72-97cea802c3fb" containerID="2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db" exitCode=0 Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.635391 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9shd" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.635920 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerDied","Data":"2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.635975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9shd" event={"ID":"0eb55921-4244-4557-aa72-97cea802c3fb","Type":"ContainerDied","Data":"ffaf192074c82abc6ecb1f812222e63630bfb46a7574913f2c2b3e520905ae73"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.644762 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/3.log" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.644980 4724 generic.go:334] "Generic (PLEG): container finished" podID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerID="60bb794b9218f7d4380c1132409b524fbffcc56aa1be0e27425474e6b08c43ca" exitCode=0 Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.645028 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerDied","Data":"60bb794b9218f7d4380c1132409b524fbffcc56aa1be0e27425474e6b08c43ca"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.647337 4724 scope.go:117] "RemoveContainer" containerID="d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.649668 4724 generic.go:334] "Generic (PLEG): container finished" podID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerID="c805f42b4b6b1239e46c0e5d1cf780973199cb7010c00ebdeaa519107094af98" exitCode=0 Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.649701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerDied","Data":"c805f42b4b6b1239e46c0e5d1cf780973199cb7010c00ebdeaa519107094af98"} Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.664524 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" (UID: "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.671638 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content\") pod \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\" (UID: \"48a2c1ec-376b-440a-9dd2-6037d5dfdd1f\") " Feb 26 11:15:41 crc kubenswrapper[4724]: W0226 11:15:41.672062 4724 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f/volumes/kubernetes.io~empty-dir/catalog-content Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.672076 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" (UID: "48a2c1ec-376b-440a-9dd2-6037d5dfdd1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.672563 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmbgj\" (UniqueName: \"kubernetes.io/projected/056030ad-19ca-4542-a486-139eb62524b0-kube-api-access-vmbgj\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.672594 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.672633 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb55921-4244-4557-aa72-97cea802c3fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.672644 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056030ad-19ca-4542-a486-139eb62524b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.711485 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9shd"] Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.720527 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9shd"] Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.740976 4724 scope.go:117] "RemoveContainer" containerID="68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.759611 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.773833 4724 scope.go:117] "RemoveContainer" containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.774685 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8\": container with ID starting with e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8 not found: ID does not exist" containerID="e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.774722 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8"} err="failed to get container status \"e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8\": rpc error: code = NotFound desc = could not find container \"e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8\": container with ID starting with e273654f561082b3e8b2b4812618574c9f74f5c5beba79ea2399a713954918e8 not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.774748 4724 scope.go:117] "RemoveContainer" containerID="d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.774868 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-utilities\") pod \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.775169 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.775289 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb\": container with ID starting with d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb not found: ID does not exist" containerID="d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.775324 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb"} err="failed to get container status \"d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb\": rpc error: code = NotFound desc = could not find container \"d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb\": container with ID starting with d1c91108a0a9f4f857d499b0086fed9ccc010e058317c1644a4aeb51182bd1bb not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.775353 4724 scope.go:117] "RemoveContainer" containerID="68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.775849 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-utilities" (OuterVolumeSpecName: "utilities") pod "8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" (UID: "8a06e0f8-4c39-4fbd-a7fc-710337cbfafc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.776087 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52\": container with ID starting with 68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52 not found: ID does not exist" containerID="68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.776152 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52"} err="failed to get container status \"68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52\": rpc error: code = NotFound desc = could not find container \"68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52\": container with ID starting with 68f40a2ae847fff66cf3f8a58bdc36426864f7d66bf3083a49f27bd40f782f52 not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.776173 4724 scope.go:117] "RemoveContainer" containerID="68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.803496 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kd6n_481dac61-2ecf-46c9-b8f8-981815ceb9c5/marketplace-operator/3.log" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.803588 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.807721 4724 scope.go:117] "RemoveContainer" containerID="de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.836331 4724 scope.go:117] "RemoveContainer" containerID="2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.852628 4724 scope.go:117] "RemoveContainer" containerID="68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.857353 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611\": container with ID starting with 68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611 not found: ID does not exist" containerID="68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.857399 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611"} err="failed to get container status \"68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611\": rpc error: code = NotFound desc = could not find container \"68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611\": container with ID starting with 68a33343c30b0049633cc7df1f9d3b3ddc1994030a58f4ddbc87657d165f9611 not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.857429 4724 scope.go:117] "RemoveContainer" containerID="de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.857730 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40\": container with ID starting with de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40 not found: ID does not exist" containerID="de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.857756 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40"} err="failed to get container status \"de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40\": rpc error: code = NotFound desc = could not find container \"de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40\": container with ID starting with de482463721a91271fe141f706afecc2d0f53a6e1e3ede673e072d21189c8d40 not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.857769 4724 scope.go:117] "RemoveContainer" containerID="2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.857986 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e\": container with ID starting with 2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e not found: ID does not exist" 
containerID="2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.858005 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e"} err="failed to get container status \"2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e\": rpc error: code = NotFound desc = could not find container \"2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e\": container with ID starting with 2abcfecb919182f1a655c61fe5d154bff920a083a8832e369dfa1acba547361e not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.858017 4724 scope.go:117] "RemoveContainer" containerID="2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.875688 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-catalog-content\") pod \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.875736 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-operator-metrics\") pod \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.875760 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ngbn\" (UniqueName: \"kubernetes.io/projected/481dac61-2ecf-46c9-b8f8-981815ceb9c5-kube-api-access-8ngbn\") pod \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.875777 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq4gb\" (UniqueName: \"kubernetes.io/projected/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-kube-api-access-hq4gb\") pod \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\" (UID: \"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.876418 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-trusted-ca\") pod \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\" (UID: \"481dac61-2ecf-46c9-b8f8-981815ceb9c5\") " Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.876869 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.877224 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "481dac61-2ecf-46c9-b8f8-981815ceb9c5" (UID: "481dac61-2ecf-46c9-b8f8-981815ceb9c5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.880797 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "481dac61-2ecf-46c9-b8f8-981815ceb9c5" (UID: "481dac61-2ecf-46c9-b8f8-981815ceb9c5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.882670 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-kube-api-access-hq4gb" (OuterVolumeSpecName: "kube-api-access-hq4gb") pod "8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" (UID: "8a06e0f8-4c39-4fbd-a7fc-710337cbfafc"). InnerVolumeSpecName "kube-api-access-hq4gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.882870 4724 scope.go:117] "RemoveContainer" containerID="ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.883964 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481dac61-2ecf-46c9-b8f8-981815ceb9c5-kube-api-access-8ngbn" (OuterVolumeSpecName: "kube-api-access-8ngbn") pod "481dac61-2ecf-46c9-b8f8-981815ceb9c5" (UID: "481dac61-2ecf-46c9-b8f8-981815ceb9c5"). InnerVolumeSpecName "kube-api-access-8ngbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.921747 4724 scope.go:117] "RemoveContainer" containerID="e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.931812 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" (UID: "8a06e0f8-4c39-4fbd-a7fc-710337cbfafc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.942310 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb5gc"] Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.943881 4724 scope.go:117] "RemoveContainer" containerID="2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.944451 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db\": container with ID starting with 2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db not found: ID does not exist" containerID="2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.944494 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db"} err="failed to get container status \"2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db\": rpc error: code = NotFound desc = could not find container \"2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db\": container with ID starting with 2fd06c1e0167a1445e0f39f8f9447bbecd8e138bc772f2f5c6d35250766ad7db not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.944526 4724 scope.go:117] "RemoveContainer" containerID="ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.944973 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21\": container with ID starting with ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21 not found: ID does not exist" containerID="ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.945010 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21"} err="failed to get container status \"ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21\": rpc error: code = NotFound desc = could not find container \"ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21\": container with ID starting with ce9109683db87f4fd1179b0fd5816a4a83accab340c5a68563b265d2fffe6f21 not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.945034 4724 scope.go:117] "RemoveContainer" containerID="e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783" Feb 26 11:15:41 crc kubenswrapper[4724]: E0226 11:15:41.948523 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783\": container with ID starting with e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783 not found: ID does not exist" containerID="e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.948558 4724 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783"} err="failed to get container status \"e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783\": rpc error: code = NotFound desc = could not find container \"e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783\": container with ID starting with e51d3b0a466d4991aff3d941e6f0f98b9a674da5b67c4304cfeeb58716f85783 not found: ID does not exist" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.948581 4724 scope.go:117] "RemoveContainer" containerID="416a1d5a3cf7f428ab79b4fd4d226abd53743d103395b27b9fb360cd62a9533c" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.959633 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xb5gc"] Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.964441 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqtct"] Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.971204 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mqtct"] Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.977862 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.977898 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.977912 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ngbn\" (UniqueName: \"kubernetes.io/projected/481dac61-2ecf-46c9-b8f8-981815ceb9c5-kube-api-access-8ngbn\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.977926 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq4gb\" (UniqueName: \"kubernetes.io/projected/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc-kube-api-access-hq4gb\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.977938 4724 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/481dac61-2ecf-46c9-b8f8-981815ceb9c5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.985973 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056030ad-19ca-4542-a486-139eb62524b0" path="/var/lib/kubelet/pods/056030ad-19ca-4542-a486-139eb62524b0/volumes" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.986820 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" path="/var/lib/kubelet/pods/0eb55921-4244-4557-aa72-97cea802c3fb/volumes" Feb 26 11:15:41 crc kubenswrapper[4724]: I0226 11:15:41.987602 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" path="/var/lib/kubelet/pods/48a2c1ec-376b-440a-9dd2-6037d5dfdd1f/volumes" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.656378 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" 
event={"ID":"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3","Type":"ContainerStarted","Data":"251f45856f7f451624d91ca929f15910d582f3d0404e001be5ebff4c8e5adff4"} Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.656460 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" event={"ID":"2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3","Type":"ContainerStarted","Data":"c7eb4e3bdcff9d196f97bbcc2ff5015a0d40836b2f13191add5b6d447e89fdc8"} Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.656956 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.658475 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" event={"ID":"481dac61-2ecf-46c9-b8f8-981815ceb9c5","Type":"ContainerDied","Data":"c9416bdb5c0137356e8452bd208c2ce63e71a9b96bff1bb953c0a0194faa4c48"} Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.658533 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8kd6n" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.658554 4724 scope.go:117] "RemoveContainer" containerID="60bb794b9218f7d4380c1132409b524fbffcc56aa1be0e27425474e6b08c43ca" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.661517 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92dsj" event={"ID":"8a06e0f8-4c39-4fbd-a7fc-710337cbfafc","Type":"ContainerDied","Data":"88c6fd068c3900d96f1c3cb1e5cdb6638b47d5caf15cc888124d07ed4f75c162"} Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.661676 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92dsj" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.666345 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.675970 4724 scope.go:117] "RemoveContainer" containerID="c805f42b4b6b1239e46c0e5d1cf780973199cb7010c00ebdeaa519107094af98" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.688846 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" podStartSLOduration=2.688820175 podStartE2EDuration="2.688820175s" podCreationTimestamp="2026-02-26 11:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:15:42.680443795 +0000 UTC m=+609.336182920" watchObservedRunningTime="2026-02-26 11:15:42.688820175 +0000 UTC m=+609.344559310" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.692241 4724 scope.go:117] "RemoveContainer" containerID="34bd3724bd7f361d1cde69fdf74167630cb7f6bd8f6b0023e121e01a2c0b03f2" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.721113 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92dsj"] Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.721194 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92dsj"] Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.726167 4724 scope.go:117] "RemoveContainer" containerID="4813411aa567eae908b02addf2ce6181ac31597794f264c7ee8b1d0852ce8da2" Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.755764 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kd6n"] Feb 26 11:15:42 crc kubenswrapper[4724]: I0226 11:15:42.760112 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kd6n"] Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.156253 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hw4f"] Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.156676 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.156753 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.156822 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.156890 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.156964 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157032 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 
11:15:43.157090 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157150 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.157254 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157332 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.157401 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157456 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.157520 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157574 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.157655 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157722 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.157784 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157839 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.157898 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.157951 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="extract-utilities" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.158004 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158067 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.158120 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158195 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" 
Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.158256 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158333 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.158398 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158458 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="extract-content" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.158518 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158572 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158719 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158783 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158844 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a2c1ec-376b-440a-9dd2-6037d5dfdd1f" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158900 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.158957 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.159014 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.159068 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb55921-4244-4557-aa72-97cea802c3fb" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.159128 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="056030ad-19ca-4542-a486-139eb62524b0" containerName="registry-server" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.159278 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.159337 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: E0226 11:15:43.159407 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.159721 4724 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.159928 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" containerName="marketplace-operator" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.160533 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.164866 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hw4f"] Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.165514 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.191292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-catalog-content\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.191493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-utilities\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.191606 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpbd\" (UniqueName: \"kubernetes.io/projected/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-kube-api-access-9qpbd\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.291981 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qpbd\" (UniqueName: \"kubernetes.io/projected/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-kube-api-access-9qpbd\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.292375 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-catalog-content\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.292500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-utilities\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.292914 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-catalog-content\") pod 
\"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.292969 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-utilities\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.318942 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpbd\" (UniqueName: \"kubernetes.io/projected/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-kube-api-access-9qpbd\") pod \"community-operators-6hw4f\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.489919 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.683926 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hw4f"] Feb 26 11:15:43 crc kubenswrapper[4724]: W0226 11:15:43.691525 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1390f0e7_ad55_44f1_9ef0_0a732c57cc28.slice/crio-44a592933bb3afd64f8281d3d39f4fd425be2984484b1dd2a056cc06b82af48f WatchSource:0}: Error finding container 44a592933bb3afd64f8281d3d39f4fd425be2984484b1dd2a056cc06b82af48f: Status 404 returned error can't find the container with id 44a592933bb3afd64f8281d3d39f4fd425be2984484b1dd2a056cc06b82af48f Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.981944 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481dac61-2ecf-46c9-b8f8-981815ceb9c5" path="/var/lib/kubelet/pods/481dac61-2ecf-46c9-b8f8-981815ceb9c5/volumes" Feb 26 11:15:43 crc kubenswrapper[4724]: I0226 11:15:43.982677 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a06e0f8-4c39-4fbd-a7fc-710337cbfafc" path="/var/lib/kubelet/pods/8a06e0f8-4c39-4fbd-a7fc-710337cbfafc/volumes" Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.682458 4724 generic.go:334] "Generic (PLEG): container finished" podID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerID="b1764800ed13fd553e7e0bc366982ad8d2202defde84d07e318ca82e19d781e1" exitCode=0 Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.683334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hw4f" event={"ID":"1390f0e7-ad55-44f1-9ef0-0a732c57cc28","Type":"ContainerDied","Data":"b1764800ed13fd553e7e0bc366982ad8d2202defde84d07e318ca82e19d781e1"} Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.683462 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hw4f" event={"ID":"1390f0e7-ad55-44f1-9ef0-0a732c57cc28","Type":"ContainerStarted","Data":"44a592933bb3afd64f8281d3d39f4fd425be2984484b1dd2a056cc06b82af48f"} Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.950808 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8h6mc"] Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.952044 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.954420 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 11:15:44 crc kubenswrapper[4724]: I0226 11:15:44.978798 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8h6mc"] Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.115019 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88rc\" (UniqueName: \"kubernetes.io/projected/e8868abd-2431-4e5b-98d6-574ca6449d4b-kube-api-access-k88rc\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.115115 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8868abd-2431-4e5b-98d6-574ca6449d4b-utilities\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.115145 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8868abd-2431-4e5b-98d6-574ca6449d4b-catalog-content\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.216192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88rc\" (UniqueName: \"kubernetes.io/projected/e8868abd-2431-4e5b-98d6-574ca6449d4b-kube-api-access-k88rc\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.216310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8868abd-2431-4e5b-98d6-574ca6449d4b-utilities\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.216345 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8868abd-2431-4e5b-98d6-574ca6449d4b-catalog-content\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.216938 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8868abd-2431-4e5b-98d6-574ca6449d4b-utilities\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.217001 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8868abd-2431-4e5b-98d6-574ca6449d4b-catalog-content\") pod \"redhat-marketplace-8h6mc\" (UID: 
\"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.238136 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88rc\" (UniqueName: \"kubernetes.io/projected/e8868abd-2431-4e5b-98d6-574ca6449d4b-kube-api-access-k88rc\") pod \"redhat-marketplace-8h6mc\" (UID: \"e8868abd-2431-4e5b-98d6-574ca6449d4b\") " pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.286512 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.523499 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8h6mc"] Feb 26 11:15:45 crc kubenswrapper[4724]: W0226 11:15:45.531681 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8868abd_2431_4e5b_98d6_574ca6449d4b.slice/crio-07335651b55a3396c1d01b1f470160f32aa97790d964bf149b23138b3a05f049 WatchSource:0}: Error finding container 07335651b55a3396c1d01b1f470160f32aa97790d964bf149b23138b3a05f049: Status 404 returned error can't find the container with id 07335651b55a3396c1d01b1f470160f32aa97790d964bf149b23138b3a05f049 Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.548198 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rgvbv"] Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.549093 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.552819 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.562887 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgvbv"] Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.701989 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8h6mc" event={"ID":"e8868abd-2431-4e5b-98d6-574ca6449d4b","Type":"ContainerStarted","Data":"07335651b55a3396c1d01b1f470160f32aa97790d964bf149b23138b3a05f049"} Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.724519 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-catalog-content\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.724568 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-utilities\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.724593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk62p\" (UniqueName: \"kubernetes.io/projected/11e1e3c7-2b69-4645-9219-806bc00f5717-kube-api-access-rk62p\") pod 
\"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.825990 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk62p\" (UniqueName: \"kubernetes.io/projected/11e1e3c7-2b69-4645-9219-806bc00f5717-kube-api-access-rk62p\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.826106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-catalog-content\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.826131 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-utilities\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.826964 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-utilities\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.827402 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-catalog-content\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.848637 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk62p\" (UniqueName: \"kubernetes.io/projected/11e1e3c7-2b69-4645-9219-806bc00f5717-kube-api-access-rk62p\") pod \"redhat-operators-rgvbv\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:45 crc kubenswrapper[4724]: I0226 11:15:45.928530 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.118666 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgvbv"] Feb 26 11:15:46 crc kubenswrapper[4724]: W0226 11:15:46.127149 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e1e3c7_2b69_4645_9219_806bc00f5717.slice/crio-1db7d4f73687c8f7fc1cc43bdbeb7b63894416c1c95c6412b9fb499dad8b67ce WatchSource:0}: Error finding container 1db7d4f73687c8f7fc1cc43bdbeb7b63894416c1c95c6412b9fb499dad8b67ce: Status 404 returned error can't find the container with id 1db7d4f73687c8f7fc1cc43bdbeb7b63894416c1c95c6412b9fb499dad8b67ce Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.707705 4724 generic.go:334] "Generic (PLEG): container finished" podID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerID="e5a22da4c1c5497c40d0239a8b9010a7b64d505ce15433765c72c7b970f75000" exitCode=0 Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.707900 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hw4f" event={"ID":"1390f0e7-ad55-44f1-9ef0-0a732c57cc28","Type":"ContainerDied","Data":"e5a22da4c1c5497c40d0239a8b9010a7b64d505ce15433765c72c7b970f75000"} Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.711746 4724 generic.go:334] "Generic (PLEG): container finished" podID="e8868abd-2431-4e5b-98d6-574ca6449d4b" containerID="da085d29504f05afb097a2544e52a9cb7ba946e39cc63d5fd012c3384cce1068" exitCode=0 Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.711773 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8h6mc" event={"ID":"e8868abd-2431-4e5b-98d6-574ca6449d4b","Type":"ContainerDied","Data":"da085d29504f05afb097a2544e52a9cb7ba946e39cc63d5fd012c3384cce1068"} Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.714275 4724 generic.go:334] "Generic (PLEG): container finished" podID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerID="e74868a908cb2b969cb7866ad998411af99c3357bc99f76067c98dc0fdb85701" exitCode=0 Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.714318 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgvbv" event={"ID":"11e1e3c7-2b69-4645-9219-806bc00f5717","Type":"ContainerDied","Data":"e74868a908cb2b969cb7866ad998411af99c3357bc99f76067c98dc0fdb85701"} Feb 26 11:15:46 crc kubenswrapper[4724]: I0226 11:15:46.714387 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgvbv" event={"ID":"11e1e3c7-2b69-4645-9219-806bc00f5717","Type":"ContainerStarted","Data":"1db7d4f73687c8f7fc1cc43bdbeb7b63894416c1c95c6412b9fb499dad8b67ce"} Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.350973 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gnhv"] Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.352921 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.357507 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.367694 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gnhv"] Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.549732 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-utilities\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.549832 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56cts\" (UniqueName: \"kubernetes.io/projected/0988507e-1e0a-40d5-becb-7dff50d436ac-kube-api-access-56cts\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.549860 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-catalog-content\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.652090 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-utilities\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.652261 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56cts\" (UniqueName: \"kubernetes.io/projected/0988507e-1e0a-40d5-becb-7dff50d436ac-kube-api-access-56cts\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.652292 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-catalog-content\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.653083 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-catalog-content\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.653306 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-utilities\") pod \"certified-operators-7gnhv\" (UID: 
\"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.672575 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56cts\" (UniqueName: \"kubernetes.io/projected/0988507e-1e0a-40d5-becb-7dff50d436ac-kube-api-access-56cts\") pod \"certified-operators-7gnhv\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.688083 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.740770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hw4f" event={"ID":"1390f0e7-ad55-44f1-9ef0-0a732c57cc28","Type":"ContainerStarted","Data":"673a1d66f5308f991c9b1ce81660bb0d7b6bfceea770cfcd67bb88075f7243c6"} Feb 26 11:15:47 crc kubenswrapper[4724]: I0226 11:15:47.760756 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hw4f" podStartSLOduration=2.280669848 podStartE2EDuration="4.760736233s" podCreationTimestamp="2026-02-26 11:15:43 +0000 UTC" firstStartedPulling="2026-02-26 11:15:44.685555509 +0000 UTC m=+611.341294624" lastFinishedPulling="2026-02-26 11:15:47.165621894 +0000 UTC m=+613.821361009" observedRunningTime="2026-02-26 11:15:47.756439745 +0000 UTC m=+614.412178870" watchObservedRunningTime="2026-02-26 11:15:47.760736233 +0000 UTC m=+614.416475348" Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.167443 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gnhv"] Feb 26 11:15:48 crc kubenswrapper[4724]: W0226 11:15:48.170109 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0988507e_1e0a_40d5_becb_7dff50d436ac.slice/crio-8beebfc9cea5dacdae8feb5ce15d2681b88afd3a349eede28bc82138ee5bbdd9 WatchSource:0}: Error finding container 8beebfc9cea5dacdae8feb5ce15d2681b88afd3a349eede28bc82138ee5bbdd9: Status 404 returned error can't find the container with id 8beebfc9cea5dacdae8feb5ce15d2681b88afd3a349eede28bc82138ee5bbdd9 Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.747705 4724 generic.go:334] "Generic (PLEG): container finished" podID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerID="995520926b3b94cdc7bcf673617c332a6cbc0f28364a4a0b7aaf176d743080b3" exitCode=0 Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.747767 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerDied","Data":"995520926b3b94cdc7bcf673617c332a6cbc0f28364a4a0b7aaf176d743080b3"} Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.748064 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerStarted","Data":"8beebfc9cea5dacdae8feb5ce15d2681b88afd3a349eede28bc82138ee5bbdd9"} Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.753434 4724 generic.go:334] "Generic (PLEG): container finished" podID="e8868abd-2431-4e5b-98d6-574ca6449d4b" containerID="e8615dd0bd7282691a5d24ee416e2f5e040518287e970b96e4670de8a24346d3" exitCode=0 Feb 26 11:15:48 crc 
kubenswrapper[4724]: I0226 11:15:48.753522 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8h6mc" event={"ID":"e8868abd-2431-4e5b-98d6-574ca6449d4b","Type":"ContainerDied","Data":"e8615dd0bd7282691a5d24ee416e2f5e040518287e970b96e4670de8a24346d3"} Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.758682 4724 generic.go:334] "Generic (PLEG): container finished" podID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerID="9839beb9d9f07797c6c47f08c3ff8a4c742a9feaecbc6f516be4db8526d5be9b" exitCode=0 Feb 26 11:15:48 crc kubenswrapper[4724]: I0226 11:15:48.759695 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgvbv" event={"ID":"11e1e3c7-2b69-4645-9219-806bc00f5717","Type":"ContainerDied","Data":"9839beb9d9f07797c6c47f08c3ff8a4c742a9feaecbc6f516be4db8526d5be9b"} Feb 26 11:15:49 crc kubenswrapper[4724]: I0226 11:15:49.765802 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8h6mc" event={"ID":"e8868abd-2431-4e5b-98d6-574ca6449d4b","Type":"ContainerStarted","Data":"aa85bf090ac10d174afee26cda0b69632cdc3cb024f4248600ac71c1361a174e"} Feb 26 11:15:49 crc kubenswrapper[4724]: I0226 11:15:49.768511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgvbv" event={"ID":"11e1e3c7-2b69-4645-9219-806bc00f5717","Type":"ContainerStarted","Data":"7a55874dfed892b1f0935adbc519bb605d08005363b20300a214536fcf65e46b"} Feb 26 11:15:49 crc kubenswrapper[4724]: I0226 11:15:49.771541 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerStarted","Data":"1913c638ba48d2d77ac5f2e534bb66e0dfc4d99c67bc73c98efe46bf967f4424"} Feb 26 11:15:49 crc kubenswrapper[4724]: I0226 11:15:49.809637 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8h6mc" podStartSLOduration=3.3743314509999998 podStartE2EDuration="5.809609779s" podCreationTimestamp="2026-02-26 11:15:44 +0000 UTC" firstStartedPulling="2026-02-26 11:15:46.714442172 +0000 UTC m=+613.370181287" lastFinishedPulling="2026-02-26 11:15:49.1497205 +0000 UTC m=+615.805459615" observedRunningTime="2026-02-26 11:15:49.786905228 +0000 UTC m=+616.442644343" watchObservedRunningTime="2026-02-26 11:15:49.809609779 +0000 UTC m=+616.465348894" Feb 26 11:15:49 crc kubenswrapper[4724]: I0226 11:15:49.809837 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rgvbv" podStartSLOduration=2.335140846 podStartE2EDuration="4.809831055s" podCreationTimestamp="2026-02-26 11:15:45 +0000 UTC" firstStartedPulling="2026-02-26 11:15:46.715433047 +0000 UTC m=+613.371172162" lastFinishedPulling="2026-02-26 11:15:49.190123256 +0000 UTC m=+615.845862371" observedRunningTime="2026-02-26 11:15:49.807607839 +0000 UTC m=+616.463346954" watchObservedRunningTime="2026-02-26 11:15:49.809831055 +0000 UTC m=+616.465570190" Feb 26 11:15:50 crc kubenswrapper[4724]: I0226 11:15:50.778624 4724 generic.go:334] "Generic (PLEG): container finished" podID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerID="1913c638ba48d2d77ac5f2e534bb66e0dfc4d99c67bc73c98efe46bf967f4424" exitCode=0 Feb 26 11:15:50 crc kubenswrapper[4724]: I0226 11:15:50.778739 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" 
event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerDied","Data":"1913c638ba48d2d77ac5f2e534bb66e0dfc4d99c67bc73c98efe46bf967f4424"} Feb 26 11:15:51 crc kubenswrapper[4724]: I0226 11:15:51.786933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerStarted","Data":"2e01f8064d60ca1c149d5d85a9937168b7214dc2f6b2959585469b9f801ce087"} Feb 26 11:15:53 crc kubenswrapper[4724]: I0226 11:15:53.495479 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:53 crc kubenswrapper[4724]: I0226 11:15:53.496771 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:53 crc kubenswrapper[4724]: I0226 11:15:53.536431 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:53 crc kubenswrapper[4724]: I0226 11:15:53.589800 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gnhv" podStartSLOduration=4.094201112 podStartE2EDuration="6.589774516s" podCreationTimestamp="2026-02-26 11:15:47 +0000 UTC" firstStartedPulling="2026-02-26 11:15:48.749298654 +0000 UTC m=+615.405037769" lastFinishedPulling="2026-02-26 11:15:51.244872058 +0000 UTC m=+617.900611173" observedRunningTime="2026-02-26 11:15:51.808769174 +0000 UTC m=+618.464508289" watchObservedRunningTime="2026-02-26 11:15:53.589774516 +0000 UTC m=+620.245513641" Feb 26 11:15:53 crc kubenswrapper[4724]: I0226 11:15:53.850985 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.287022 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.287363 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.346737 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.857932 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8h6mc" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.929240 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.929516 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:55 crc kubenswrapper[4724]: I0226 11:15:55.981365 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:56 crc kubenswrapper[4724]: I0226 11:15:56.862290 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:15:57 crc kubenswrapper[4724]: I0226 11:15:57.689443 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:57 crc kubenswrapper[4724]: I0226 11:15:57.689489 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:57 crc kubenswrapper[4724]: I0226 11:15:57.738586 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:15:57 crc kubenswrapper[4724]: I0226 11:15:57.884978 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.129403 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535076-dn7r2"] Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.130419 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.133427 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.133861 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.135335 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.141582 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-dn7r2"] Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.211157 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbkh\" (UniqueName: \"kubernetes.io/projected/bcdf7642-2b76-4b06-9e81-c0e117d23043-kube-api-access-dvbkh\") pod \"auto-csr-approver-29535076-dn7r2\" (UID: \"bcdf7642-2b76-4b06-9e81-c0e117d23043\") " pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.312422 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbkh\" (UniqueName: \"kubernetes.io/projected/bcdf7642-2b76-4b06-9e81-c0e117d23043-kube-api-access-dvbkh\") pod \"auto-csr-approver-29535076-dn7r2\" (UID: \"bcdf7642-2b76-4b06-9e81-c0e117d23043\") " pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.342904 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbkh\" (UniqueName: \"kubernetes.io/projected/bcdf7642-2b76-4b06-9e81-c0e117d23043-kube-api-access-dvbkh\") pod \"auto-csr-approver-29535076-dn7r2\" (UID: \"bcdf7642-2b76-4b06-9e81-c0e117d23043\") " pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.461466 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:00 crc kubenswrapper[4724]: I0226 11:16:00.648775 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-dn7r2"] Feb 26 11:16:03 crc kubenswrapper[4724]: I0226 11:16:00.842262 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" event={"ID":"bcdf7642-2b76-4b06-9e81-c0e117d23043","Type":"ContainerStarted","Data":"820645eab988b12bcba8ffe5cde1267194f6a510dc809c0e32637d7fd6d78372"} Feb 26 11:16:03 crc kubenswrapper[4724]: I0226 11:16:03.865294 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" event={"ID":"bcdf7642-2b76-4b06-9e81-c0e117d23043","Type":"ContainerStarted","Data":"9c18ac377ccbeb67ba87dfab2a39a13c39c98068afbb9f175dd620d990849919"} Feb 26 11:16:03 crc kubenswrapper[4724]: I0226 11:16:03.879823 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" podStartSLOduration=1.3627724350000001 podStartE2EDuration="3.879806278s" podCreationTimestamp="2026-02-26 11:16:00 +0000 UTC" firstStartedPulling="2026-02-26 11:16:00.658968362 +0000 UTC m=+627.314707477" lastFinishedPulling="2026-02-26 11:16:03.176002195 +0000 UTC m=+629.831741320" observedRunningTime="2026-02-26 11:16:03.876754401 +0000 UTC m=+630.532493526" watchObservedRunningTime="2026-02-26 11:16:03.879806278 +0000 UTC m=+630.535545393" Feb 26 11:16:04 crc kubenswrapper[4724]: I0226 11:16:04.871920 4724 generic.go:334] "Generic (PLEG): container finished" podID="bcdf7642-2b76-4b06-9e81-c0e117d23043" containerID="9c18ac377ccbeb67ba87dfab2a39a13c39c98068afbb9f175dd620d990849919" exitCode=0 Feb 26 11:16:04 crc kubenswrapper[4724]: I0226 11:16:04.871967 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" event={"ID":"bcdf7642-2b76-4b06-9e81-c0e117d23043","Type":"ContainerDied","Data":"9c18ac377ccbeb67ba87dfab2a39a13c39c98068afbb9f175dd620d990849919"} Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.082443 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.223617 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvbkh\" (UniqueName: \"kubernetes.io/projected/bcdf7642-2b76-4b06-9e81-c0e117d23043-kube-api-access-dvbkh\") pod \"bcdf7642-2b76-4b06-9e81-c0e117d23043\" (UID: \"bcdf7642-2b76-4b06-9e81-c0e117d23043\") " Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.234107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdf7642-2b76-4b06-9e81-c0e117d23043-kube-api-access-dvbkh" (OuterVolumeSpecName: "kube-api-access-dvbkh") pod "bcdf7642-2b76-4b06-9e81-c0e117d23043" (UID: "bcdf7642-2b76-4b06-9e81-c0e117d23043"). InnerVolumeSpecName "kube-api-access-dvbkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.325071 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvbkh\" (UniqueName: \"kubernetes.io/projected/bcdf7642-2b76-4b06-9e81-c0e117d23043-kube-api-access-dvbkh\") on node \"crc\" DevicePath \"\"" Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.884946 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" event={"ID":"bcdf7642-2b76-4b06-9e81-c0e117d23043","Type":"ContainerDied","Data":"820645eab988b12bcba8ffe5cde1267194f6a510dc809c0e32637d7fd6d78372"} Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.884987 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820645eab988b12bcba8ffe5cde1267194f6a510dc809c0e32637d7fd6d78372" Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.884999 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535076-dn7r2" Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.931335 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535070-lxjqb"] Feb 26 11:16:06 crc kubenswrapper[4724]: I0226 11:16:06.935802 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535070-lxjqb"] Feb 26 11:16:07 crc kubenswrapper[4724]: I0226 11:16:07.982258 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7940e7c1-723b-42e3-818f-dfbd7a795e71" path="/var/lib/kubelet/pods/7940e7c1-723b-42e3-818f-dfbd7a795e71/volumes" Feb 26 11:16:46 crc kubenswrapper[4724]: I0226 11:16:46.906484 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:16:46 crc kubenswrapper[4724]: I0226 11:16:46.907996 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:17:16 crc kubenswrapper[4724]: I0226 11:17:16.906235 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:17:16 crc kubenswrapper[4724]: I0226 11:17:16.906916 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:17:46 crc kubenswrapper[4724]: I0226 11:17:46.906843 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:17:46 crc 
Feb 26 11:17:46 crc kubenswrapper[4724]: I0226 11:17:46.907517 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:17:46 crc kubenswrapper[4724]: I0226 11:17:46.907573 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:17:46 crc kubenswrapper[4724]: I0226 11:17:46.908217 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1edc54f7129749b0acdb90a5fcc53d2261e46a8913bfac1b99f27a0443dc7c8a"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:17:46 crc kubenswrapper[4724]: I0226 11:17:46.908280 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://1edc54f7129749b0acdb90a5fcc53d2261e46a8913bfac1b99f27a0443dc7c8a" gracePeriod=600 Feb 26 11:17:47 crc kubenswrapper[4724]: I0226 11:17:47.404719 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="1edc54f7129749b0acdb90a5fcc53d2261e46a8913bfac1b99f27a0443dc7c8a" exitCode=0 Feb 26 11:17:47 crc kubenswrapper[4724]: I0226 11:17:47.404793 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"1edc54f7129749b0acdb90a5fcc53d2261e46a8913bfac1b99f27a0443dc7c8a"} Feb 26 11:17:47 crc kubenswrapper[4724]: I0226 11:17:47.405094 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"4ea38ea9f17bd357f830c4f1610289188452d159aa12b5f949dbdd14483c4545"} Feb 26 11:17:47 crc kubenswrapper[4724]: I0226 11:17:47.405127 4724 scope.go:117] "RemoveContainer" containerID="512c865cae468760a5a7701ee00c685edb3eb8ce270a9fed6d0b0e6c4c9fab74" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.131080 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535078-c5qd4"] Feb 26 11:18:00 crc kubenswrapper[4724]: E0226 11:18:00.133170 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdf7642-2b76-4b06-9e81-c0e117d23043" containerName="oc" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.133217 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdf7642-2b76-4b06-9e81-c0e117d23043" containerName="oc" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.133371 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdf7642-2b76-4b06-9e81-c0e117d23043" containerName="oc" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.133895 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-c5qd4"
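
Annotation: the -29535078 suffix on the Job name is not random. The CronJob controller names each Job after its scheduled time expressed in minutes since the Unix epoch, which is why consecutive runs of this every-two-minutes job appear as -29535076, -29535078, -29535080. Decoding the suffix recovers the schedule time:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Job name suffix = scheduled time in minutes since the Unix epoch
        const suffix = 29535078
        fmt.Println(time.Unix(suffix*60, 0).UTC()) // 2026-02-26 11:18:00 +0000 UTC
    }

That decoded time matches the journald stamps: the three runs in this log start at exactly 11:16:00, 11:18:00, and 11:20:00.
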
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.135759 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.138309 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.140139 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-c5qd4"] Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.140658 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.208547 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xls2b\" (UniqueName: \"kubernetes.io/projected/8abdce5a-e575-4855-8bc2-8fe66527b99b-kube-api-access-xls2b\") pod \"auto-csr-approver-29535078-c5qd4\" (UID: \"8abdce5a-e575-4855-8bc2-8fe66527b99b\") " pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.309352 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xls2b\" (UniqueName: \"kubernetes.io/projected/8abdce5a-e575-4855-8bc2-8fe66527b99b-kube-api-access-xls2b\") pod \"auto-csr-approver-29535078-c5qd4\" (UID: \"8abdce5a-e575-4855-8bc2-8fe66527b99b\") " pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.326571 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xls2b\" (UniqueName: \"kubernetes.io/projected/8abdce5a-e575-4855-8bc2-8fe66527b99b-kube-api-access-xls2b\") pod \"auto-csr-approver-29535078-c5qd4\" (UID: \"8abdce5a-e575-4855-8bc2-8fe66527b99b\") " pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.450382 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:00 crc kubenswrapper[4724]: I0226 11:18:00.841649 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-c5qd4"] Feb 26 11:18:00 crc kubenswrapper[4724]: W0226 11:18:00.854373 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8abdce5a_e575_4855_8bc2_8fe66527b99b.slice/crio-4e8dd1c556473e022f9fb75b447bc9cf713f432dd831d48d25bd89ba5b52eac5 WatchSource:0}: Error finding container 4e8dd1c556473e022f9fb75b447bc9cf713f432dd831d48d25bd89ba5b52eac5: Status 404 returned error can't find the container with id 4e8dd1c556473e022f9fb75b447bc9cf713f432dd831d48d25bd89ba5b52eac5 Feb 26 11:18:01 crc kubenswrapper[4724]: I0226 11:18:01.493356 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" event={"ID":"8abdce5a-e575-4855-8bc2-8fe66527b99b","Type":"ContainerStarted","Data":"4e8dd1c556473e022f9fb75b447bc9cf713f432dd831d48d25bd89ba5b52eac5"} Feb 26 11:18:03 crc kubenswrapper[4724]: I0226 11:18:03.509698 4724 generic.go:334] "Generic (PLEG): container finished" podID="8abdce5a-e575-4855-8bc2-8fe66527b99b" containerID="82b24805739d3def8d9f13587b0e5bca452b03f9f63d072b99854e4721dd70aa" exitCode=0 Feb 26 11:18:03 crc kubenswrapper[4724]: I0226 11:18:03.509930 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" event={"ID":"8abdce5a-e575-4855-8bc2-8fe66527b99b","Type":"ContainerDied","Data":"82b24805739d3def8d9f13587b0e5bca452b03f9f63d072b99854e4721dd70aa"} Feb 26 11:18:04 crc kubenswrapper[4724]: I0226 11:18:04.842888 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:04 crc kubenswrapper[4724]: I0226 11:18:04.966817 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xls2b\" (UniqueName: \"kubernetes.io/projected/8abdce5a-e575-4855-8bc2-8fe66527b99b-kube-api-access-xls2b\") pod \"8abdce5a-e575-4855-8bc2-8fe66527b99b\" (UID: \"8abdce5a-e575-4855-8bc2-8fe66527b99b\") " Feb 26 11:18:04 crc kubenswrapper[4724]: I0226 11:18:04.971583 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8abdce5a-e575-4855-8bc2-8fe66527b99b-kube-api-access-xls2b" (OuterVolumeSpecName: "kube-api-access-xls2b") pod "8abdce5a-e575-4855-8bc2-8fe66527b99b" (UID: "8abdce5a-e575-4855-8bc2-8fe66527b99b"). InnerVolumeSpecName "kube-api-access-xls2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.068251 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xls2b\" (UniqueName: \"kubernetes.io/projected/8abdce5a-e575-4855-8bc2-8fe66527b99b-kube-api-access-xls2b\") on node \"crc\" DevicePath \"\"" Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.522256 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" event={"ID":"8abdce5a-e575-4855-8bc2-8fe66527b99b","Type":"ContainerDied","Data":"4e8dd1c556473e022f9fb75b447bc9cf713f432dd831d48d25bd89ba5b52eac5"} Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.522292 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e8dd1c556473e022f9fb75b447bc9cf713f432dd831d48d25bd89ba5b52eac5" Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.522292 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535078-c5qd4" Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.725529 4724 scope.go:117] "RemoveContainer" containerID="b3cfc0eb4e47a693d43dd4113a84c88d408e112e4e68c57057e6512a8879bc5e" Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.746642 4724 scope.go:117] "RemoveContainer" containerID="5b76f1a8012fe6e0eeb4815d4aefb6b5593c7df2aacd6955d88d6b9bc93d2046" Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.899774 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535072-h4cjv"] Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.902890 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535072-h4cjv"] Feb 26 11:18:05 crc kubenswrapper[4724]: I0226 11:18:05.983771 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f17d90f-02c5-4721-9f39-2f50cafbd329" path="/var/lib/kubelet/pods/5f17d90f-02c5-4721-9f39-2f50cafbd329/volumes" Feb 26 11:18:48 crc kubenswrapper[4724]: I0226 11:18:48.406303 4724 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.138828 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535080-vg4gp"] Feb 26 11:20:00 crc kubenswrapper[4724]: E0226 11:20:00.140148 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abdce5a-e575-4855-8bc2-8fe66527b99b" containerName="oc" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.140170 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abdce5a-e575-4855-8bc2-8fe66527b99b" containerName="oc" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.140321 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abdce5a-e575-4855-8bc2-8fe66527b99b" containerName="oc" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.140951 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.142739 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.144638 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.146447 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.158856 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-vg4gp"] Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.295958 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-856nz\" (UniqueName: \"kubernetes.io/projected/1048467b-0158-4faa-b646-9ca7667afae5-kube-api-access-856nz\") pod \"auto-csr-approver-29535080-vg4gp\" (UID: \"1048467b-0158-4faa-b646-9ca7667afae5\") " pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.397151 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-856nz\" (UniqueName: \"kubernetes.io/projected/1048467b-0158-4faa-b646-9ca7667afae5-kube-api-access-856nz\") pod \"auto-csr-approver-29535080-vg4gp\" (UID: \"1048467b-0158-4faa-b646-9ca7667afae5\") " pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.417881 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-856nz\" (UniqueName: \"kubernetes.io/projected/1048467b-0158-4faa-b646-9ca7667afae5-kube-api-access-856nz\") pod \"auto-csr-approver-29535080-vg4gp\" (UID: \"1048467b-0158-4faa-b646-9ca7667afae5\") " pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.464881 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.649701 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-vg4gp"] Feb 26 11:20:00 crc kubenswrapper[4724]: I0226 11:20:00.663015 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:20:01 crc kubenswrapper[4724]: I0226 11:20:01.152813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" event={"ID":"1048467b-0158-4faa-b646-9ca7667afae5","Type":"ContainerStarted","Data":"7020f146c8f5998ab6a5742a6c1e6fa7eda9bfde7d63b4ffe0cc62e2be462db3"} Feb 26 11:20:03 crc kubenswrapper[4724]: I0226 11:20:03.175598 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" event={"ID":"1048467b-0158-4faa-b646-9ca7667afae5","Type":"ContainerStarted","Data":"f0114419db2843b1f6baa899c2bb4ea118535df743b2164195d4fc6cdf0298bc"} Feb 26 11:20:04 crc kubenswrapper[4724]: I0226 11:20:04.184079 4724 generic.go:334] "Generic (PLEG): container finished" podID="1048467b-0158-4faa-b646-9ca7667afae5" containerID="f0114419db2843b1f6baa899c2bb4ea118535df743b2164195d4fc6cdf0298bc" exitCode=0 Feb 26 11:20:04 crc kubenswrapper[4724]: I0226 11:20:04.184155 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" event={"ID":"1048467b-0158-4faa-b646-9ca7667afae5","Type":"ContainerDied","Data":"f0114419db2843b1f6baa899c2bb4ea118535df743b2164195d4fc6cdf0298bc"} Feb 26 11:20:05 crc kubenswrapper[4724]: I0226 11:20:05.412783 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:05 crc kubenswrapper[4724]: I0226 11:20:05.568221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-856nz\" (UniqueName: \"kubernetes.io/projected/1048467b-0158-4faa-b646-9ca7667afae5-kube-api-access-856nz\") pod \"1048467b-0158-4faa-b646-9ca7667afae5\" (UID: \"1048467b-0158-4faa-b646-9ca7667afae5\") " Feb 26 11:20:05 crc kubenswrapper[4724]: I0226 11:20:05.573469 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1048467b-0158-4faa-b646-9ca7667afae5-kube-api-access-856nz" (OuterVolumeSpecName: "kube-api-access-856nz") pod "1048467b-0158-4faa-b646-9ca7667afae5" (UID: "1048467b-0158-4faa-b646-9ca7667afae5"). InnerVolumeSpecName "kube-api-access-856nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:20:05 crc kubenswrapper[4724]: I0226 11:20:05.669892 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-856nz\" (UniqueName: \"kubernetes.io/projected/1048467b-0158-4faa-b646-9ca7667afae5-kube-api-access-856nz\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:05 crc kubenswrapper[4724]: I0226 11:20:05.808542 4724 scope.go:117] "RemoveContainer" containerID="11e1403e5e6119f071ff6aee52bb43715d37c187f93e13cb72e2b562dc780dcf" Feb 26 11:20:06 crc kubenswrapper[4724]: I0226 11:20:06.197721 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" event={"ID":"1048467b-0158-4faa-b646-9ca7667afae5","Type":"ContainerDied","Data":"7020f146c8f5998ab6a5742a6c1e6fa7eda9bfde7d63b4ffe0cc62e2be462db3"} Feb 26 11:20:06 crc kubenswrapper[4724]: I0226 11:20:06.197795 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7020f146c8f5998ab6a5742a6c1e6fa7eda9bfde7d63b4ffe0cc62e2be462db3" Feb 26 11:20:06 crc kubenswrapper[4724]: I0226 11:20:06.197925 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535080-vg4gp" Feb 26 11:20:06 crc kubenswrapper[4724]: I0226 11:20:06.473483 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-wvpbv"] Feb 26 11:20:06 crc kubenswrapper[4724]: I0226 11:20:06.479910 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535074-wvpbv"] Feb 26 11:20:07 crc kubenswrapper[4724]: I0226 11:20:07.984085 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b79a85-e78f-427f-8250-bfe8be1e098b" path="/var/lib/kubelet/pods/a2b79a85-e78f-427f-8250-bfe8be1e098b/volumes" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.238834 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9wnqm"] Feb 26 11:20:10 crc kubenswrapper[4724]: E0226 11:20:10.239431 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1048467b-0158-4faa-b646-9ca7667afae5" containerName="oc" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.239448 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1048467b-0158-4faa-b646-9ca7667afae5" containerName="oc" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.239564 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1048467b-0158-4faa-b646-9ca7667afae5" containerName="oc" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.240015 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.297172 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9wnqm"] Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427694 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrj4\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-kube-api-access-nkrj4\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427718 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/626c607d-5887-4f76-91de-f6e69b65cbbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427749 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/626c607d-5887-4f76-91de-f6e69b65cbbd-registry-certificates\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427769 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/626c607d-5887-4f76-91de-f6e69b65cbbd-trusted-ca\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427795 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-bound-sa-token\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427822 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/626c607d-5887-4f76-91de-f6e69b65cbbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.427972 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-registry-tls\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.459391 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529129 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrj4\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-kube-api-access-nkrj4\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529232 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/626c607d-5887-4f76-91de-f6e69b65cbbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529282 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/626c607d-5887-4f76-91de-f6e69b65cbbd-registry-certificates\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529477 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/626c607d-5887-4f76-91de-f6e69b65cbbd-trusted-ca\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529519 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-bound-sa-token\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529577 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/626c607d-5887-4f76-91de-f6e69b65cbbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-registry-tls\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.529854 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/626c607d-5887-4f76-91de-f6e69b65cbbd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.531100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/626c607d-5887-4f76-91de-f6e69b65cbbd-trusted-ca\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.531404 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/626c607d-5887-4f76-91de-f6e69b65cbbd-registry-certificates\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.535796 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/626c607d-5887-4f76-91de-f6e69b65cbbd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.535832 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-registry-tls\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.546895 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-bound-sa-token\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.547883 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrj4\" (UniqueName: \"kubernetes.io/projected/626c607d-5887-4f76-91de-f6e69b65cbbd-kube-api-access-nkrj4\") pod \"image-registry-66df7c8f76-9wnqm\" (UID: \"626c607d-5887-4f76-91de-f6e69b65cbbd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.554648 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:10 crc kubenswrapper[4724]: I0226 11:20:10.750895 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9wnqm"] Feb 26 11:20:11 crc kubenswrapper[4724]: I0226 11:20:11.227863 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" event={"ID":"626c607d-5887-4f76-91de-f6e69b65cbbd","Type":"ContainerStarted","Data":"3b72459bee3b0ae72823c1e052ff04c7804123131dbd793bb33cc899d2dd2848"} Feb 26 11:20:11 crc kubenswrapper[4724]: I0226 11:20:11.227919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" event={"ID":"626c607d-5887-4f76-91de-f6e69b65cbbd","Type":"ContainerStarted","Data":"1f037bc161803239c3a851210747440f1d3af3ed7862b7435b666efc8cbb6c70"} Feb 26 11:20:11 crc kubenswrapper[4724]: I0226 11:20:11.228366 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:11 crc kubenswrapper[4724]: I0226 11:20:11.249586 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" podStartSLOduration=1.249567386 podStartE2EDuration="1.249567386s" podCreationTimestamp="2026-02-26 11:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:20:11.246154021 +0000 UTC m=+877.901893136" watchObservedRunningTime="2026-02-26 11:20:11.249567386 +0000 UTC m=+877.905306501" Feb 26 11:20:16 crc kubenswrapper[4724]: I0226 11:20:16.906744 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:20:16 crc kubenswrapper[4724]: I0226 11:20:16.907348 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:20:30 crc kubenswrapper[4724]: I0226 11:20:30.560656 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9wnqm" Feb 26 11:20:30 crc kubenswrapper[4724]: I0226 11:20:30.617393 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vxxfb"] Feb 26 11:20:46 crc kubenswrapper[4724]: I0226 11:20:46.906249 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:20:46 crc kubenswrapper[4724]: I0226 11:20:46.906819 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 26 11:20:55 crc kubenswrapper[4724]: I0226 11:20:55.656662 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" podUID="c4f276b5-977b-4a34-9c9c-2b699d10345c" containerName="registry" containerID="cri-o://f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483" gracePeriod=30 Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.031564 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.195593 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-tls\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.195653 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x4ws\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-kube-api-access-7x4ws\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.195882 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.195907 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f276b5-977b-4a34-9c9c-2b699d10345c-installation-pull-secrets\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.195934 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-bound-sa-token\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.195959 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-certificates\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.196055 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-trusted-ca\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") " Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.196088 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f276b5-977b-4a34-9c9c-2b699d10345c-ca-trust-extracted\") pod \"c4f276b5-977b-4a34-9c9c-2b699d10345c\" (UID: \"c4f276b5-977b-4a34-9c9c-2b699d10345c\") "
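
Annotation: gracePeriod=30 above (and gracePeriod=600 for machine-config-daemon earlier) is the window between the SIGTERM the runtime delivers and the SIGKILL that follows if the process has not exited. The old registry exits cleanly within a second here, once the replacement pod image-registry-66df7c8f76-9wnqm has already gone Ready, which is the normal Deployment rollover ordering. A minimal sketch of the handler a container needs to exit inside that window (illustrative only, not the registry's actual shutdown code):

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM)
        fmt.Println("serving; waiting for SIGTERM")
        <-sigs
        // flush state and close listeners here, well inside the grace period
        fmt.Println("SIGTERM received; exiting cleanly")
    }
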
kubenswrapper[4724]: I0226 11:20:56.197037 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.197099 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.201760 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.201779 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f276b5-977b-4a34-9c9c-2b699d10345c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.203033 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.203372 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-kube-api-access-7x4ws" (OuterVolumeSpecName: "kube-api-access-7x4ws") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "kube-api-access-7x4ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.215059 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f276b5-977b-4a34-9c9c-2b699d10345c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.246723 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c4f276b5-977b-4a34-9c9c-2b699d10345c" (UID: "c4f276b5-977b-4a34-9c9c-2b699d10345c"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297442 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297470 4724 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f276b5-977b-4a34-9c9c-2b699d10345c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297480 4724 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297488 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x4ws\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-kube-api-access-7x4ws\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297497 4724 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f276b5-977b-4a34-9c9c-2b699d10345c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297504 4724 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f276b5-977b-4a34-9c9c-2b699d10345c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.297514 4724 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f276b5-977b-4a34-9c9c-2b699d10345c-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.487727 4724 generic.go:334] "Generic (PLEG): container finished" podID="c4f276b5-977b-4a34-9c9c-2b699d10345c" containerID="f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483" exitCode=0 Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.487784 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" event={"ID":"c4f276b5-977b-4a34-9c9c-2b699d10345c","Type":"ContainerDied","Data":"f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483"} Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.487820 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" event={"ID":"c4f276b5-977b-4a34-9c9c-2b699d10345c","Type":"ContainerDied","Data":"b1c5b6937deee5464fc1c9ff64df0a816291780ef22bcbea9fc436040b4fa385"} Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.487854 4724 scope.go:117] "RemoveContainer" containerID="f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.488080 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vxxfb" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.505716 4724 scope.go:117] "RemoveContainer" containerID="f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483" Feb 26 11:20:56 crc kubenswrapper[4724]: E0226 11:20:56.506376 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483\": container with ID starting with f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483 not found: ID does not exist" containerID="f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.506429 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483"} err="failed to get container status \"f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483\": rpc error: code = NotFound desc = could not find container \"f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483\": container with ID starting with f7450fbe9852626c163bf1fcddaa5fa2c930b946b28a67abd5887779ffb80483 not found: ID does not exist" Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.517283 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vxxfb"] Feb 26 11:20:56 crc kubenswrapper[4724]: I0226 11:20:56.519371 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vxxfb"] Feb 26 11:20:57 crc kubenswrapper[4724]: I0226 11:20:57.981665 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f276b5-977b-4a34-9c9c-2b699d10345c" path="/var/lib/kubelet/pods/c4f276b5-977b-4a34-9c9c-2b699d10345c/volumes" Feb 26 11:21:05 crc kubenswrapper[4724]: I0226 11:21:05.850876 4724 scope.go:117] "RemoveContainer" containerID="514423164b87bfec0bc2047e5625b5bff2d273e028d6dba53d3dfb9b15d72049" Feb 26 11:21:16 crc kubenswrapper[4724]: I0226 11:21:16.906764 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:21:16 crc kubenswrapper[4724]: I0226 11:21:16.907446 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:21:16 crc kubenswrapper[4724]: I0226 11:21:16.907505 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:21:16 crc kubenswrapper[4724]: I0226 11:21:16.908161 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ea38ea9f17bd357f830c4f1610289188452d159aa12b5f949dbdd14483c4545"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:21:16 crc kubenswrapper[4724]: I0226 
11:21:16.908231 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://4ea38ea9f17bd357f830c4f1610289188452d159aa12b5f949dbdd14483c4545" gracePeriod=600 Feb 26 11:21:17 crc kubenswrapper[4724]: I0226 11:21:17.619642 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="4ea38ea9f17bd357f830c4f1610289188452d159aa12b5f949dbdd14483c4545" exitCode=0 Feb 26 11:21:17 crc kubenswrapper[4724]: I0226 11:21:17.619759 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"4ea38ea9f17bd357f830c4f1610289188452d159aa12b5f949dbdd14483c4545"} Feb 26 11:21:17 crc kubenswrapper[4724]: I0226 11:21:17.620026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce"} Feb 26 11:21:17 crc kubenswrapper[4724]: I0226 11:21:17.620062 4724 scope.go:117] "RemoveContainer" containerID="1edc54f7129749b0acdb90a5fcc53d2261e46a8913bfac1b99f27a0443dc7c8a" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.131423 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535082-zvg5w"] Feb 26 11:22:00 crc kubenswrapper[4724]: E0226 11:22:00.132120 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f276b5-977b-4a34-9c9c-2b699d10345c" containerName="registry" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.132133 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f276b5-977b-4a34-9c9c-2b699d10345c" containerName="registry" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.132332 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f276b5-977b-4a34-9c9c-2b699d10345c" containerName="registry" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.132771 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.136250 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.139564 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.143276 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-zvg5w"] Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.143760 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.301769 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4297\" (UniqueName: \"kubernetes.io/projected/be10571b-4581-4365-9f84-a1e04076f8d4-kube-api-access-k4297\") pod \"auto-csr-approver-29535082-zvg5w\" (UID: \"be10571b-4581-4365-9f84-a1e04076f8d4\") " pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.403261 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4297\" (UniqueName: \"kubernetes.io/projected/be10571b-4581-4365-9f84-a1e04076f8d4-kube-api-access-k4297\") pod \"auto-csr-approver-29535082-zvg5w\" (UID: \"be10571b-4581-4365-9f84-a1e04076f8d4\") " pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.424973 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4297\" (UniqueName: \"kubernetes.io/projected/be10571b-4581-4365-9f84-a1e04076f8d4-kube-api-access-k4297\") pod \"auto-csr-approver-29535082-zvg5w\" (UID: \"be10571b-4581-4365-9f84-a1e04076f8d4\") " pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.450898 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.634059 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-zvg5w"] Feb 26 11:22:00 crc kubenswrapper[4724]: I0226 11:22:00.998686 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" event={"ID":"be10571b-4581-4365-9f84-a1e04076f8d4","Type":"ContainerStarted","Data":"058d5b8735fdbba7738ae0728f0fc6ed44d56095f0a96fdb5c8529574105479a"} Feb 26 11:22:02 crc kubenswrapper[4724]: I0226 11:22:02.006082 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" event={"ID":"be10571b-4581-4365-9f84-a1e04076f8d4","Type":"ContainerStarted","Data":"e57c843ee14ebaa3663e4d3163f22e09905491a0a163c402b52e48cd7b2e0b37"} Feb 26 11:22:02 crc kubenswrapper[4724]: I0226 11:22:02.022684 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" podStartSLOduration=0.971950158 podStartE2EDuration="2.022663772s" podCreationTimestamp="2026-02-26 11:22:00 +0000 UTC" firstStartedPulling="2026-02-26 11:22:00.641688826 +0000 UTC m=+987.297427941" lastFinishedPulling="2026-02-26 11:22:01.69240244 +0000 UTC m=+988.348141555" observedRunningTime="2026-02-26 11:22:02.021049722 +0000 UTC m=+988.676788867" watchObservedRunningTime="2026-02-26 11:22:02.022663772 +0000 UTC m=+988.678402887" Feb 26 11:22:03 crc kubenswrapper[4724]: I0226 11:22:03.013005 4724 generic.go:334] "Generic (PLEG): container finished" podID="be10571b-4581-4365-9f84-a1e04076f8d4" containerID="e57c843ee14ebaa3663e4d3163f22e09905491a0a163c402b52e48cd7b2e0b37" exitCode=0 Feb 26 11:22:03 crc kubenswrapper[4724]: I0226 11:22:03.013094 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" event={"ID":"be10571b-4581-4365-9f84-a1e04076f8d4","Type":"ContainerDied","Data":"e57c843ee14ebaa3663e4d3163f22e09905491a0a163c402b52e48cd7b2e0b37"} Feb 26 11:22:04 crc kubenswrapper[4724]: I0226 11:22:04.233633 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:04 crc kubenswrapper[4724]: I0226 11:22:04.348861 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4297\" (UniqueName: \"kubernetes.io/projected/be10571b-4581-4365-9f84-a1e04076f8d4-kube-api-access-k4297\") pod \"be10571b-4581-4365-9f84-a1e04076f8d4\" (UID: \"be10571b-4581-4365-9f84-a1e04076f8d4\") " Feb 26 11:22:04 crc kubenswrapper[4724]: I0226 11:22:04.358548 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be10571b-4581-4365-9f84-a1e04076f8d4-kube-api-access-k4297" (OuterVolumeSpecName: "kube-api-access-k4297") pod "be10571b-4581-4365-9f84-a1e04076f8d4" (UID: "be10571b-4581-4365-9f84-a1e04076f8d4"). InnerVolumeSpecName "kube-api-access-k4297". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:22:04 crc kubenswrapper[4724]: I0226 11:22:04.452062 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4297\" (UniqueName: \"kubernetes.io/projected/be10571b-4581-4365-9f84-a1e04076f8d4-kube-api-access-k4297\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:05 crc kubenswrapper[4724]: I0226 11:22:05.035498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" event={"ID":"be10571b-4581-4365-9f84-a1e04076f8d4","Type":"ContainerDied","Data":"058d5b8735fdbba7738ae0728f0fc6ed44d56095f0a96fdb5c8529574105479a"} Feb 26 11:22:05 crc kubenswrapper[4724]: I0226 11:22:05.035539 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="058d5b8735fdbba7738ae0728f0fc6ed44d56095f0a96fdb5c8529574105479a" Feb 26 11:22:05 crc kubenswrapper[4724]: I0226 11:22:05.035534 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535082-zvg5w" Feb 26 11:22:05 crc kubenswrapper[4724]: I0226 11:22:05.075965 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-dn7r2"] Feb 26 11:22:05 crc kubenswrapper[4724]: I0226 11:22:05.079656 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535076-dn7r2"] Feb 26 11:22:05 crc kubenswrapper[4724]: I0226 11:22:05.982658 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdf7642-2b76-4b06-9e81-c0e117d23043" path="/var/lib/kubelet/pods/bcdf7642-2b76-4b06-9e81-c0e117d23043/volumes" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.887604 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn"] Feb 26 11:22:12 crc kubenswrapper[4724]: E0226 11:22:12.888032 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be10571b-4581-4365-9f84-a1e04076f8d4" containerName="oc" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.888044 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="be10571b-4581-4365-9f84-a1e04076f8d4" containerName="oc" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.888133 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="be10571b-4581-4365-9f84-a1e04076f8d4" containerName="oc" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.888519 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.892582 4724 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2znf7" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.892609 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.892645 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.908579 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn"] Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.921313 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-h8dsz"] Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.922169 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h8dsz" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.925047 4724 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6fdh8" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.943097 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h8dsz"] Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.952971 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4h46l"] Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.955495 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.958069 4724 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lx488" Feb 26 11:22:12 crc kubenswrapper[4724]: I0226 11:22:12.983976 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4h46l"] Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.059299 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtl9b\" (UniqueName: \"kubernetes.io/projected/edc23874-b08b-4197-8662-4daac14a41bb-kube-api-access-xtl9b\") pod \"cert-manager-cainjector-cf98fcc89-n2mfn\" (UID: \"edc23874-b08b-4197-8662-4daac14a41bb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.059359 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5jm\" (UniqueName: \"kubernetes.io/projected/bd772542-67d2-4628-9b09-34bc55eec26d-kube-api-access-cm5jm\") pod \"cert-manager-webhook-687f57d79b-4h46l\" (UID: \"bd772542-67d2-4628-9b09-34bc55eec26d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.059423 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwsp\" (UniqueName: \"kubernetes.io/projected/949d93dc-988e-49b8-9fde-63c227730e7a-kube-api-access-rcwsp\") pod \"cert-manager-858654f9db-h8dsz\" (UID: \"949d93dc-988e-49b8-9fde-63c227730e7a\") " pod="cert-manager/cert-manager-858654f9db-h8dsz" Feb 26 11:22:13 crc 
kubenswrapper[4724]: I0226 11:22:13.160258 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwsp\" (UniqueName: \"kubernetes.io/projected/949d93dc-988e-49b8-9fde-63c227730e7a-kube-api-access-rcwsp\") pod \"cert-manager-858654f9db-h8dsz\" (UID: \"949d93dc-988e-49b8-9fde-63c227730e7a\") " pod="cert-manager/cert-manager-858654f9db-h8dsz" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.160382 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtl9b\" (UniqueName: \"kubernetes.io/projected/edc23874-b08b-4197-8662-4daac14a41bb-kube-api-access-xtl9b\") pod \"cert-manager-cainjector-cf98fcc89-n2mfn\" (UID: \"edc23874-b08b-4197-8662-4daac14a41bb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.160408 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm5jm\" (UniqueName: \"kubernetes.io/projected/bd772542-67d2-4628-9b09-34bc55eec26d-kube-api-access-cm5jm\") pod \"cert-manager-webhook-687f57d79b-4h46l\" (UID: \"bd772542-67d2-4628-9b09-34bc55eec26d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.184079 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm5jm\" (UniqueName: \"kubernetes.io/projected/bd772542-67d2-4628-9b09-34bc55eec26d-kube-api-access-cm5jm\") pod \"cert-manager-webhook-687f57d79b-4h46l\" (UID: \"bd772542-67d2-4628-9b09-34bc55eec26d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.190070 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtl9b\" (UniqueName: \"kubernetes.io/projected/edc23874-b08b-4197-8662-4daac14a41bb-kube-api-access-xtl9b\") pod \"cert-manager-cainjector-cf98fcc89-n2mfn\" (UID: \"edc23874-b08b-4197-8662-4daac14a41bb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.190688 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwsp\" (UniqueName: \"kubernetes.io/projected/949d93dc-988e-49b8-9fde-63c227730e7a-kube-api-access-rcwsp\") pod \"cert-manager-858654f9db-h8dsz\" (UID: \"949d93dc-988e-49b8-9fde-63c227730e7a\") " pod="cert-manager/cert-manager-858654f9db-h8dsz" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.202100 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.236062 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h8dsz" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.273533 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.658680 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn"] Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.699449 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h8dsz"] Feb 26 11:22:13 crc kubenswrapper[4724]: W0226 11:22:13.706013 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod949d93dc_988e_49b8_9fde_63c227730e7a.slice/crio-6082a29bf5bb856e4f97e2fd3b92f0160aee7fa466814fc0253e29591c4f6b17 WatchSource:0}: Error finding container 6082a29bf5bb856e4f97e2fd3b92f0160aee7fa466814fc0253e29591c4f6b17: Status 404 returned error can't find the container with id 6082a29bf5bb856e4f97e2fd3b92f0160aee7fa466814fc0253e29591c4f6b17 Feb 26 11:22:13 crc kubenswrapper[4724]: I0226 11:22:13.748937 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4h46l"] Feb 26 11:22:13 crc kubenswrapper[4724]: W0226 11:22:13.752775 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd772542_67d2_4628_9b09_34bc55eec26d.slice/crio-d78f9979e481fc7d80ca7cf440942ec150703cc6c96767a81103e5967e2924f5 WatchSource:0}: Error finding container d78f9979e481fc7d80ca7cf440942ec150703cc6c96767a81103e5967e2924f5: Status 404 returned error can't find the container with id d78f9979e481fc7d80ca7cf440942ec150703cc6c96767a81103e5967e2924f5 Feb 26 11:22:14 crc kubenswrapper[4724]: I0226 11:22:14.090746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h8dsz" event={"ID":"949d93dc-988e-49b8-9fde-63c227730e7a","Type":"ContainerStarted","Data":"6082a29bf5bb856e4f97e2fd3b92f0160aee7fa466814fc0253e29591c4f6b17"} Feb 26 11:22:14 crc kubenswrapper[4724]: I0226 11:22:14.093487 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" event={"ID":"bd772542-67d2-4628-9b09-34bc55eec26d","Type":"ContainerStarted","Data":"d78f9979e481fc7d80ca7cf440942ec150703cc6c96767a81103e5967e2924f5"} Feb 26 11:22:14 crc kubenswrapper[4724]: I0226 11:22:14.095202 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" event={"ID":"edc23874-b08b-4197-8662-4daac14a41bb","Type":"ContainerStarted","Data":"2cec67c679985de21683909adb52cbedce55f6440e5b672aa72715527655a83a"} Feb 26 11:22:18 crc kubenswrapper[4724]: I0226 11:22:18.120651 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h8dsz" event={"ID":"949d93dc-988e-49b8-9fde-63c227730e7a","Type":"ContainerStarted","Data":"c6fe436f967ced470488bf4f3e22c8048c7866711b131384021e39db72df5736"} Feb 26 11:22:18 crc kubenswrapper[4724]: I0226 11:22:18.122355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" event={"ID":"edc23874-b08b-4197-8662-4daac14a41bb","Type":"ContainerStarted","Data":"d06f8a6dcb6feea349e8d5ce15278731a3e5e89eba7e043037aee0676b8d74df"} Feb 26 11:22:18 crc kubenswrapper[4724]: I0226 11:22:18.123880 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" 
event={"ID":"bd772542-67d2-4628-9b09-34bc55eec26d","Type":"ContainerStarted","Data":"31deae06bc099fa472b491d81b335a052dc3bb105d51212ed7a6230f5388b1ce"} Feb 26 11:22:18 crc kubenswrapper[4724]: I0226 11:22:18.124040 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:18 crc kubenswrapper[4724]: I0226 11:22:18.139325 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-h8dsz" podStartSLOduration=2.088809698 podStartE2EDuration="6.139307965s" podCreationTimestamp="2026-02-26 11:22:12 +0000 UTC" firstStartedPulling="2026-02-26 11:22:13.707724385 +0000 UTC m=+1000.363463500" lastFinishedPulling="2026-02-26 11:22:17.758222652 +0000 UTC m=+1004.413961767" observedRunningTime="2026-02-26 11:22:18.137313345 +0000 UTC m=+1004.793052460" watchObservedRunningTime="2026-02-26 11:22:18.139307965 +0000 UTC m=+1004.795047090" Feb 26 11:22:18 crc kubenswrapper[4724]: I0226 11:22:18.159353 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" podStartSLOduration=2.092623253 podStartE2EDuration="6.159333216s" podCreationTimestamp="2026-02-26 11:22:12 +0000 UTC" firstStartedPulling="2026-02-26 11:22:13.754821433 +0000 UTC m=+1000.410560548" lastFinishedPulling="2026-02-26 11:22:17.821531396 +0000 UTC m=+1004.477270511" observedRunningTime="2026-02-26 11:22:18.156602698 +0000 UTC m=+1004.812341823" watchObservedRunningTime="2026-02-26 11:22:18.159333216 +0000 UTC m=+1004.815072331" Feb 26 11:22:23 crc kubenswrapper[4724]: I0226 11:22:23.277213 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4h46l" Feb 26 11:22:23 crc kubenswrapper[4724]: I0226 11:22:23.291788 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-n2mfn" podStartSLOduration=7.196081982 podStartE2EDuration="11.29177117s" podCreationTimestamp="2026-02-26 11:22:12 +0000 UTC" firstStartedPulling="2026-02-26 11:22:13.670443472 +0000 UTC m=+1000.326182587" lastFinishedPulling="2026-02-26 11:22:17.76613266 +0000 UTC m=+1004.421871775" observedRunningTime="2026-02-26 11:22:18.185151532 +0000 UTC m=+1004.840890657" watchObservedRunningTime="2026-02-26 11:22:23.29177117 +0000 UTC m=+1009.947510285" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.617381 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z56jr"] Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618352 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-controller" containerID="cri-o://56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618734 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="sbdb" containerID="cri-o://71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618777 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" 
podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="nbdb" containerID="cri-o://ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618812 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="northd" containerID="cri-o://5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618844 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618871 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-node" containerID="cri-o://16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.618907 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-acl-logging" containerID="cri-o://177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.662448 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" containerID="cri-o://31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" gracePeriod=30 Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.936887 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/2.log" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.939503 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovn-acl-logging/0.log" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.939995 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovn-controller/0.log" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.940511 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997652 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6qtw"] Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997854 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="nbdb" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997866 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="nbdb" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997875 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="northd" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997881 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="northd" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997889 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997897 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997904 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997909 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997948 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997954 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997962 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kubecfg-setup" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997968 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kubecfg-setup" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997977 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997983 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.997992 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.997998 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.998005 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-acl-logging" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998010 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-acl-logging" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.998019 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-node" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998024 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-node" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.998033 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998038 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: E0226 11:22:34.998047 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="sbdb" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998052 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="sbdb" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998134 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="sbdb" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998144 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998151 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998160 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="nbdb" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998167 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-acl-logging" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998177 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="northd" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998184 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998206 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovn-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998215 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="kube-rbac-proxy-node" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998375 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:34 crc kubenswrapper[4724]: I0226 11:22:34.998384 4724 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerName="ovnkube-controller" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.008706 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.047752 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1140bb-3473-456a-b916-cfef4d4b7222-ovn-node-metrics-cert\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.047890 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-config\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.047934 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-netns\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.047955 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-slash\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.047972 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-ovn-kubernetes\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.047993 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048012 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-env-overrides\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048422 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-netd\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048663 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-kubelet\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: 
\"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048694 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-bin\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048718 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvffk\" (UniqueName: \"kubernetes.io/projected/4c1140bb-3473-456a-b916-cfef4d4b7222-kube-api-access-wvffk\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048748 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-node-log\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048779 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-script-lib\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048803 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-openvswitch\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048823 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-log-socket\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048839 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-systemd-units\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048864 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-etc-openvswitch\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.048979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-systemd\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.049058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-ovn\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: 
\"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.049111 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-var-lib-openvswitch\") pod \"4c1140bb-3473-456a-b916-cfef4d4b7222\" (UID: \"4c1140bb-3473-456a-b916-cfef4d4b7222\") " Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.049253 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.049855 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.049972 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050012 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050032 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-log-socket" (OuterVolumeSpecName: "log-socket") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050049 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-node-log" (OuterVolumeSpecName: "node-log") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050058 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050078 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050078 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050093 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-slash" (OuterVolumeSpecName: "host-slash") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050110 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050133 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050209 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050232 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050258 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050486 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.050747 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.053533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1140bb-3473-456a-b916-cfef4d4b7222-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.054385 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1140bb-3473-456a-b916-cfef4d4b7222-kube-api-access-wvffk" (OuterVolumeSpecName: "kube-api-access-wvffk") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "kube-api-access-wvffk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.062077 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4c1140bb-3473-456a-b916-cfef4d4b7222" (UID: "4c1140bb-3473-456a-b916-cfef4d4b7222"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.150999 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-node-log\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151040 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-var-lib-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151060 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-systemd\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151077 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-systemd-units\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151094 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-cni-bin\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151252 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-run-netns\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151289 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovnkube-config\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgk4\" (UniqueName: \"kubernetes.io/projected/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-kube-api-access-cxgk4\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151343 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-slash\") pod 
\"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151473 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovn-node-metrics-cert\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151504 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151532 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151546 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151576 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-etc-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151601 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-ovn\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151619 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovnkube-script-lib\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-cni-netd\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151659 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-env-overrides\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151680 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-kubelet\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151696 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-log-socket\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151736 4724 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151748 4724 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151757 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1140bb-3473-456a-b916-cfef4d4b7222-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151766 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151774 4724 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151782 4724 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-slash\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151791 4724 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151800 4724 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151808 4724 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-env-overrides\") on node 
\"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151817 4724 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151825 4724 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151833 4724 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151841 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvffk\" (UniqueName: \"kubernetes.io/projected/4c1140bb-3473-456a-b916-cfef4d4b7222-kube-api-access-wvffk\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151849 4724 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-node-log\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151857 4724 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1140bb-3473-456a-b916-cfef4d4b7222-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151866 4724 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151873 4724 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-log-socket\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151881 4724 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151888 4724 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.151896 4724 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1140bb-3473-456a-b916-cfef4d4b7222-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.218521 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovnkube-controller/2.log" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.220538 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovn-acl-logging/0.log" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221044 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z56jr_4c1140bb-3473-456a-b916-cfef4d4b7222/ovn-controller/0.log" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221469 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" exitCode=0 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221492 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" exitCode=0 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221501 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" exitCode=0 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221508 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" exitCode=0 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221516 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" exitCode=0 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221525 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" exitCode=0 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221532 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" exitCode=143 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221540 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c1140bb-3473-456a-b916-cfef4d4b7222" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" exitCode=143 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221551 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221591 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221607 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221632 4724 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221633 4724 scope.go:117] "RemoveContainer" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221647 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221735 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221749 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221757 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221764 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221771 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221777 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221783 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221790 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221795 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221818 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} Feb 26 11:22:35 crc kubenswrapper[4724]: 
I0226 11:22:35.221826 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221833 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221839 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221845 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221852 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221858 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221864 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221870 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221877 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221886 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221895 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221903 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221909 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221916 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} Feb 26 11:22:35 crc kubenswrapper[4724]: 
I0226 11:22:35.221922 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221928 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221933 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221939 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221946 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221952 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221961 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" event={"ID":"4c1140bb-3473-456a-b916-cfef4d4b7222","Type":"ContainerDied","Data":"e5cd0dc09af5164561011dce55ac433aa2030a83598390a79ac165522fb761e7"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221969 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221976 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221983 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221989 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.221995 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.222003 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.222010 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} Feb 26 11:22:35 crc kubenswrapper[4724]: 
I0226 11:22:35.222016 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.222021 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.222027 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.222622 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z56jr" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.223424 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/1.log" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.223942 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/0.log" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.223980 4724 generic.go:334] "Generic (PLEG): container finished" podID="332754e6-e64b-4e47-988d-6f1ddbe4912e" containerID="3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384" exitCode=2 Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.224002 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ns2kr" event={"ID":"332754e6-e64b-4e47-988d-6f1ddbe4912e","Type":"ContainerDied","Data":"3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.224020 4724 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0"} Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.224419 4724 scope.go:117] "RemoveContainer" containerID="3829e785517cb10660ea5da6eca25ac7b18f4295076abb5a63943bf4f7a06384" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.255977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-etc-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256309 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-ovn\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256358 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovnkube-script-lib\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256414 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-cni-netd\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256475 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-env-overrides\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256511 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-kubelet\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256560 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-log-socket\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256590 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-node-log\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256612 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-var-lib-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256637 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-systemd\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-systemd-units\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256688 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-cni-bin\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256709 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-run-netns\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256734 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovnkube-config\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgk4\" (UniqueName: \"kubernetes.io/projected/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-kube-api-access-cxgk4\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256784 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-slash\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256804 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovn-node-metrics-cert\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256828 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.256883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.257078 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-systemd\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.257152 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-etc-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.257233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-ovn\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.257285 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.257592 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-kubelet\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.257681 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-cni-netd\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258210 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-log-socket\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258264 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-node-log\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258295 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-var-lib-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258288 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258362 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-slash\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258340 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovnkube-script-lib\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258629 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-run-netns\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.258774 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-cni-bin\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.259079 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-env-overrides\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.259211 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-run-openvswitch\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.259254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.260147 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovnkube-config\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.260629 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-systemd-units\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.273017 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-ovn-node-metrics-cert\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.275024 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z56jr"] Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.279134 4724 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z56jr"] Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.280640 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgk4\" (UniqueName: \"kubernetes.io/projected/2fd2635b-2dbc-4a26-94b4-ff73bda4af00-kube-api-access-cxgk4\") pod \"ovnkube-node-k6qtw\" (UID: \"2fd2635b-2dbc-4a26-94b4-ff73bda4af00\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.287468 4724 scope.go:117] "RemoveContainer" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.307318 4724 scope.go:117] "RemoveContainer" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.329676 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.342520 4724 scope.go:117] "RemoveContainer" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.372438 4724 scope.go:117] "RemoveContainer" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.398993 4724 scope.go:117] "RemoveContainer" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.418356 4724 scope.go:117] "RemoveContainer" containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.432799 4724 scope.go:117] "RemoveContainer" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.451906 4724 scope.go:117] "RemoveContainer" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.478497 4724 scope.go:117] "RemoveContainer" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.480218 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": container with ID starting with 31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73 not found: ID does not exist" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.480265 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} err="failed to get container status \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": rpc error: code = NotFound desc = could not find container \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": container with ID starting with 31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.480294 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 
11:22:35.481215 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": container with ID starting with 40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402 not found: ID does not exist" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.481258 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} err="failed to get container status \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": rpc error: code = NotFound desc = could not find container \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": container with ID starting with 40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.481289 4724 scope.go:117] "RemoveContainer" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.481741 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": container with ID starting with 71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2 not found: ID does not exist" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.481766 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} err="failed to get container status \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": rpc error: code = NotFound desc = could not find container \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": container with ID starting with 71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.481780 4724 scope.go:117] "RemoveContainer" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.482222 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": container with ID starting with ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e not found: ID does not exist" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.482245 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} err="failed to get container status \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": rpc error: code = NotFound desc = could not find container \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": container with ID starting with ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.482259 4724 
scope.go:117] "RemoveContainer" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.482589 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": container with ID starting with 5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de not found: ID does not exist" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.482614 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} err="failed to get container status \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": rpc error: code = NotFound desc = could not find container \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": container with ID starting with 5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.482628 4724 scope.go:117] "RemoveContainer" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.483001 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": container with ID starting with ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4 not found: ID does not exist" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.483047 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} err="failed to get container status \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": rpc error: code = NotFound desc = could not find container \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": container with ID starting with ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.483087 4724 scope.go:117] "RemoveContainer" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.483614 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": container with ID starting with 16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87 not found: ID does not exist" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.483639 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} err="failed to get container status \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": rpc error: code = NotFound desc = could not find container \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": container with ID starting with 
16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.483658 4724 scope.go:117] "RemoveContainer" containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.483987 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": container with ID starting with 177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374 not found: ID does not exist" containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.484006 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} err="failed to get container status \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": rpc error: code = NotFound desc = could not find container \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": container with ID starting with 177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.484017 4724 scope.go:117] "RemoveContainer" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.484635 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": container with ID starting with 56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071 not found: ID does not exist" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.484670 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} err="failed to get container status \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": rpc error: code = NotFound desc = could not find container \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": container with ID starting with 56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.484691 4724 scope.go:117] "RemoveContainer" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" Feb 26 11:22:35 crc kubenswrapper[4724]: E0226 11:22:35.484933 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": container with ID starting with 381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497 not found: ID does not exist" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.484959 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} err="failed to get container status \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": rpc 
error: code = NotFound desc = could not find container \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": container with ID starting with 381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.484975 4724 scope.go:117] "RemoveContainer" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.485247 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} err="failed to get container status \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": rpc error: code = NotFound desc = could not find container \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": container with ID starting with 31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.485271 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.485599 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} err="failed to get container status \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": rpc error: code = NotFound desc = could not find container \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": container with ID starting with 40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.485615 4724 scope.go:117] "RemoveContainer" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.485890 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} err="failed to get container status \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": rpc error: code = NotFound desc = could not find container \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": container with ID starting with 71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.485911 4724 scope.go:117] "RemoveContainer" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.486608 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} err="failed to get container status \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": rpc error: code = NotFound desc = could not find container \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": container with ID starting with ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.486631 4724 scope.go:117] "RemoveContainer" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" Feb 26 11:22:35 crc 
kubenswrapper[4724]: I0226 11:22:35.487065 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} err="failed to get container status \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": rpc error: code = NotFound desc = could not find container \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": container with ID starting with 5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.487081 4724 scope.go:117] "RemoveContainer" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.487314 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} err="failed to get container status \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": rpc error: code = NotFound desc = could not find container \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": container with ID starting with ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.487337 4724 scope.go:117] "RemoveContainer" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.487547 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} err="failed to get container status \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": rpc error: code = NotFound desc = could not find container \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": container with ID starting with 16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.487562 4724 scope.go:117] "RemoveContainer" containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.489049 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} err="failed to get container status \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": rpc error: code = NotFound desc = could not find container \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": container with ID starting with 177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.489096 4724 scope.go:117] "RemoveContainer" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.489364 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} err="failed to get container status \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": rpc error: code = NotFound desc = could not find container \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": container with ID 
starting with 56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.489394 4724 scope.go:117] "RemoveContainer" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.490506 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} err="failed to get container status \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": rpc error: code = NotFound desc = could not find container \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": container with ID starting with 381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.490584 4724 scope.go:117] "RemoveContainer" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.491019 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} err="failed to get container status \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": rpc error: code = NotFound desc = could not find container \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": container with ID starting with 31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.491042 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.493794 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} err="failed to get container status \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": rpc error: code = NotFound desc = could not find container \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": container with ID starting with 40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.493823 4724 scope.go:117] "RemoveContainer" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.494486 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} err="failed to get container status \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": rpc error: code = NotFound desc = could not find container \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": container with ID starting with 71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.494518 4724 scope.go:117] "RemoveContainer" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.494822 4724 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} err="failed to get container status \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": rpc error: code = NotFound desc = could not find container \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": container with ID starting with ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.494847 4724 scope.go:117] "RemoveContainer" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.495213 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} err="failed to get container status \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": rpc error: code = NotFound desc = could not find container \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": container with ID starting with 5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.495298 4724 scope.go:117] "RemoveContainer" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.496111 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} err="failed to get container status \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": rpc error: code = NotFound desc = could not find container \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": container with ID starting with ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.496146 4724 scope.go:117] "RemoveContainer" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.497909 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} err="failed to get container status \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": rpc error: code = NotFound desc = could not find container \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": container with ID starting with 16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.497934 4724 scope.go:117] "RemoveContainer" containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.498193 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} err="failed to get container status \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": rpc error: code = NotFound desc = could not find container \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": container with ID starting with 177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374 not found: ID does not exist" Feb 
26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.498214 4724 scope.go:117] "RemoveContainer" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.498552 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} err="failed to get container status \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": rpc error: code = NotFound desc = could not find container \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": container with ID starting with 56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.498575 4724 scope.go:117] "RemoveContainer" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.498841 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} err="failed to get container status \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": rpc error: code = NotFound desc = could not find container \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": container with ID starting with 381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.498884 4724 scope.go:117] "RemoveContainer" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.499862 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} err="failed to get container status \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": rpc error: code = NotFound desc = could not find container \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": container with ID starting with 31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.499889 4724 scope.go:117] "RemoveContainer" containerID="40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.500240 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402"} err="failed to get container status \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": rpc error: code = NotFound desc = could not find container \"40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402\": container with ID starting with 40acd38581d9eda728e3b437d1b15c4dc90856a1d6106447ce795f8cf85c1402 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.500267 4724 scope.go:117] "RemoveContainer" containerID="71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.500542 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2"} err="failed to get container status 
\"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": rpc error: code = NotFound desc = could not find container \"71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2\": container with ID starting with 71ca1b4e70188aba7dfcadfd2b4dfb9744f2e82cf030060845e4bde564963ae2 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.500566 4724 scope.go:117] "RemoveContainer" containerID="ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.500813 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e"} err="failed to get container status \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": rpc error: code = NotFound desc = could not find container \"ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e\": container with ID starting with ec116af22af9556823b535788ede912ae4634e74bfa183d895d54f999d70dd6e not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.500837 4724 scope.go:117] "RemoveContainer" containerID="5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501088 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de"} err="failed to get container status \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": rpc error: code = NotFound desc = could not find container \"5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de\": container with ID starting with 5598d38da94c0afb60c0fb4bbf5ac5521d86ef3332ce3fd2a4ca10d3881e09de not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501108 4724 scope.go:117] "RemoveContainer" containerID="ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501532 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4"} err="failed to get container status \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": rpc error: code = NotFound desc = could not find container \"ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4\": container with ID starting with ea3055992011e78ce75c34ba8c3ac3af403b204d89bba44a626f6384591b85b4 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501552 4724 scope.go:117] "RemoveContainer" containerID="16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501747 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87"} err="failed to get container status \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": rpc error: code = NotFound desc = could not find container \"16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87\": container with ID starting with 16366f398507990e7d44faee2490ef5d6a49189a9be466cdf35578773246dc87 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501766 4724 scope.go:117] "RemoveContainer" 
containerID="177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501953 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374"} err="failed to get container status \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": rpc error: code = NotFound desc = could not find container \"177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374\": container with ID starting with 177d3254365acb2a2acb7649fad024106a721f141b7455e3fc34ff11dcbbd374 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.501974 4724 scope.go:117] "RemoveContainer" containerID="56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.502279 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071"} err="failed to get container status \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": rpc error: code = NotFound desc = could not find container \"56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071\": container with ID starting with 56e170d1f477dbcb60447e8445b201dd98e8184fddbbcbd6387a04bdd8707071 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.502302 4724 scope.go:117] "RemoveContainer" containerID="381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.502618 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497"} err="failed to get container status \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": rpc error: code = NotFound desc = could not find container \"381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497\": container with ID starting with 381739a43d5f34eefd64f0ecdb0c68ea76aebad9be51a456ca19444b44626497 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.502640 4724 scope.go:117] "RemoveContainer" containerID="31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.502908 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73"} err="failed to get container status \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": rpc error: code = NotFound desc = could not find container \"31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73\": container with ID starting with 31353b777bb9a417b5d8a17f5760365e9173002f765835f59708f3f8719b6b73 not found: ID does not exist" Feb 26 11:22:35 crc kubenswrapper[4724]: I0226 11:22:35.985521 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1140bb-3473-456a-b916-cfef4d4b7222" path="/var/lib/kubelet/pods/4c1140bb-3473-456a-b916-cfef4d4b7222/volumes" Feb 26 11:22:36 crc kubenswrapper[4724]: I0226 11:22:36.232381 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/1.log" Feb 26 11:22:36 crc kubenswrapper[4724]: I0226 11:22:36.233463 4724 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/0.log" Feb 26 11:22:36 crc kubenswrapper[4724]: I0226 11:22:36.233623 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ns2kr" event={"ID":"332754e6-e64b-4e47-988d-6f1ddbe4912e","Type":"ContainerStarted","Data":"56fcbfaaf446edfbd5e399ccd0214c41b46f18b3429afd1ac4b359d158e757d3"} Feb 26 11:22:36 crc kubenswrapper[4724]: I0226 11:22:36.237413 4724 generic.go:334] "Generic (PLEG): container finished" podID="2fd2635b-2dbc-4a26-94b4-ff73bda4af00" containerID="cbaac47276fb4d776185c9673aec67b27d0ec0bc9614b345c5ed4d255fe357a8" exitCode=0 Feb 26 11:22:36 crc kubenswrapper[4724]: I0226 11:22:36.237445 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerDied","Data":"cbaac47276fb4d776185c9673aec67b27d0ec0bc9614b345c5ed4d255fe357a8"} Feb 26 11:22:36 crc kubenswrapper[4724]: I0226 11:22:36.237467 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"3a19dc33b22f2d62891e0b9079315d34645d60aefcb7fe484da30755a727015b"} Feb 26 11:22:37 crc kubenswrapper[4724]: I0226 11:22:37.246357 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"f845abc70cb8cc1825b034cf7884f09fca789ded67bbffa59d1921e62b8d2828"} Feb 26 11:22:37 crc kubenswrapper[4724]: I0226 11:22:37.247275 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"0b7d06e441a873aafd930e137e122810e1337526b4f0e0e4f03a615d78efaf88"} Feb 26 11:22:37 crc kubenswrapper[4724]: I0226 11:22:37.247312 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"9e3c3e12c64085d4975460c122f752c66ac78e7be560480748a1015ca7d2825f"} Feb 26 11:22:37 crc kubenswrapper[4724]: I0226 11:22:37.247324 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"0020f4a7bc28a4d42838315892d8d678042f7dc93872b61ca52e577de76d4780"} Feb 26 11:22:37 crc kubenswrapper[4724]: I0226 11:22:37.247334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"402598a186708308d2966d8442eaba1a1622ed37b3458ded8bba91a7076a4337"} Feb 26 11:22:37 crc kubenswrapper[4724]: I0226 11:22:37.247343 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"5cb9719d020e279673f096e1f22fd113b9be773b844ea5adac3bf2c264e32005"} Feb 26 11:22:40 crc kubenswrapper[4724]: I0226 11:22:40.266597 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"5171f4f641eb73044cb1be511033fe2b3e50bd0ceeea3a28d7fd6fcf7f3e83d0"} Feb 
26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.284880 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" event={"ID":"2fd2635b-2dbc-4a26-94b4-ff73bda4af00","Type":"ContainerStarted","Data":"108529e4d448d090e42fed29f34272c01194d4a0f3d059d8e126d800fc238324"} Feb 26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.285639 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.285661 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.285673 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.313721 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.316235 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:22:43 crc kubenswrapper[4724]: I0226 11:22:43.322235 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" podStartSLOduration=9.322218412 podStartE2EDuration="9.322218412s" podCreationTimestamp="2026-02-26 11:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:22:43.320619952 +0000 UTC m=+1029.976359087" watchObservedRunningTime="2026-02-26 11:22:43.322218412 +0000 UTC m=+1029.977957527" Feb 26 11:23:04 crc kubenswrapper[4724]: I0226 11:23:04.988576 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s"] Feb 26 11:23:04 crc kubenswrapper[4724]: I0226 11:23:04.989969 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:04 crc kubenswrapper[4724]: I0226 11:23:04.991559 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.007476 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s"] Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.051817 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.051858 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vlhb\" (UniqueName: \"kubernetes.io/projected/648c1a76-a342-4f33-b06e-3a7969b0e1bb-kube-api-access-4vlhb\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.051885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.152744 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.153080 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vlhb\" (UniqueName: \"kubernetes.io/projected/648c1a76-a342-4f33-b06e-3a7969b0e1bb-kube-api-access-4vlhb\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.153228 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.153318 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.153735 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.173062 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vlhb\" (UniqueName: \"kubernetes.io/projected/648c1a76-a342-4f33-b06e-3a7969b0e1bb-kube-api-access-4vlhb\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.307148 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.368864 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6qtw" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.799642 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s"] Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.926947 4724 scope.go:117] "RemoveContainer" containerID="9c18ac377ccbeb67ba87dfab2a39a13c39c98068afbb9f175dd620d990849919" Feb 26 11:23:05 crc kubenswrapper[4724]: I0226 11:23:05.957455 4724 scope.go:117] "RemoveContainer" containerID="f8bbd2447df323d89699be521357dbe7c968d5129560dee75d356d5048b0cdf0" Feb 26 11:23:06 crc kubenswrapper[4724]: I0226 11:23:06.441285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerStarted","Data":"ef44f8a208d1b79997feddd0742f23646e610053f2e1c531b9bbb2135469a3da"} Feb 26 11:23:06 crc kubenswrapper[4724]: I0226 11:23:06.441662 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerStarted","Data":"d3e3756305c9f1af4ae6a753ed3238292d90dd98acbea701bdf7d4a4bbf0bcb7"} Feb 26 11:23:06 crc kubenswrapper[4724]: I0226 11:23:06.443005 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ns2kr_332754e6-e64b-4e47-988d-6f1ddbe4912e/kube-multus/1.log" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.247620 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zd8ht"] Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.249335 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.262508 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zd8ht"] Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.279146 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7tsv\" (UniqueName: \"kubernetes.io/projected/3cc101ce-cc9a-495a-8c52-1e16f32ab574-kube-api-access-m7tsv\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.279577 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-catalog-content\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.279731 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-utilities\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.381296 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-utilities\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.381655 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7tsv\" (UniqueName: \"kubernetes.io/projected/3cc101ce-cc9a-495a-8c52-1e16f32ab574-kube-api-access-m7tsv\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.381781 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-catalog-content\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.381860 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-utilities\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.382333 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-catalog-content\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.402700 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m7tsv\" (UniqueName: \"kubernetes.io/projected/3cc101ce-cc9a-495a-8c52-1e16f32ab574-kube-api-access-m7tsv\") pod \"redhat-operators-zd8ht\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.450742 4724 generic.go:334] "Generic (PLEG): container finished" podID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerID="ef44f8a208d1b79997feddd0742f23646e610053f2e1c531b9bbb2135469a3da" exitCode=0 Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.450795 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerDied","Data":"ef44f8a208d1b79997feddd0742f23646e610053f2e1c531b9bbb2135469a3da"} Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.565437 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:07 crc kubenswrapper[4724]: I0226 11:23:07.799215 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zd8ht"] Feb 26 11:23:08 crc kubenswrapper[4724]: I0226 11:23:08.469624 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerID="31c1c6b71f4000930753cf9133e71e3a11819648713ce948251d9d21b8e6e512" exitCode=0 Feb 26 11:23:08 crc kubenswrapper[4724]: I0226 11:23:08.469925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerDied","Data":"31c1c6b71f4000930753cf9133e71e3a11819648713ce948251d9d21b8e6e512"} Feb 26 11:23:08 crc kubenswrapper[4724]: I0226 11:23:08.469954 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerStarted","Data":"9ca44ae101b91b9b7b48eb7c171404e978d32c812e2b19d6287d7e6a257f4c57"} Feb 26 11:23:10 crc kubenswrapper[4724]: I0226 11:23:10.481475 4724 generic.go:334] "Generic (PLEG): container finished" podID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerID="2b77922264e34f44378771aef48bc795831a1f9ef898e1a1009d9f8afc6ac83b" exitCode=0 Feb 26 11:23:10 crc kubenswrapper[4724]: I0226 11:23:10.481513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerDied","Data":"2b77922264e34f44378771aef48bc795831a1f9ef898e1a1009d9f8afc6ac83b"} Feb 26 11:23:10 crc kubenswrapper[4724]: I0226 11:23:10.483543 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerStarted","Data":"4a1c882ab126bffc0ea6926b89f0827c4c5c08c6da0df2fa4e10f9103a54f656"} Feb 26 11:23:11 crc kubenswrapper[4724]: I0226 11:23:11.529401 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerStarted","Data":"5f6b158aa33e4fb410be187ac1a37e225e6fa13b42d3d5b03922fe14010b00f8"} Feb 26 11:23:11 crc kubenswrapper[4724]: I0226 11:23:11.556227 4724 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" podStartSLOduration=5.204849826 podStartE2EDuration="7.556202847s" podCreationTimestamp="2026-02-26 11:23:04 +0000 UTC" firstStartedPulling="2026-02-26 11:23:07.453290879 +0000 UTC m=+1054.109029994" lastFinishedPulling="2026-02-26 11:23:09.8046439 +0000 UTC m=+1056.460383015" observedRunningTime="2026-02-26 11:23:11.550579546 +0000 UTC m=+1058.206318681" watchObservedRunningTime="2026-02-26 11:23:11.556202847 +0000 UTC m=+1058.211941972" Feb 26 11:23:12 crc kubenswrapper[4724]: I0226 11:23:12.540742 4724 generic.go:334] "Generic (PLEG): container finished" podID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerID="5f6b158aa33e4fb410be187ac1a37e225e6fa13b42d3d5b03922fe14010b00f8" exitCode=0 Feb 26 11:23:12 crc kubenswrapper[4724]: I0226 11:23:12.540794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerDied","Data":"5f6b158aa33e4fb410be187ac1a37e225e6fa13b42d3d5b03922fe14010b00f8"} Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.870053 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.903125 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-bundle\") pod \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.903234 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-util\") pod \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.903377 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vlhb\" (UniqueName: \"kubernetes.io/projected/648c1a76-a342-4f33-b06e-3a7969b0e1bb-kube-api-access-4vlhb\") pod \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\" (UID: \"648c1a76-a342-4f33-b06e-3a7969b0e1bb\") " Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.904341 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-bundle" (OuterVolumeSpecName: "bundle") pod "648c1a76-a342-4f33-b06e-3a7969b0e1bb" (UID: "648c1a76-a342-4f33-b06e-3a7969b0e1bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.911347 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648c1a76-a342-4f33-b06e-3a7969b0e1bb-kube-api-access-4vlhb" (OuterVolumeSpecName: "kube-api-access-4vlhb") pod "648c1a76-a342-4f33-b06e-3a7969b0e1bb" (UID: "648c1a76-a342-4f33-b06e-3a7969b0e1bb"). InnerVolumeSpecName "kube-api-access-4vlhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:23:13 crc kubenswrapper[4724]: I0226 11:23:13.917197 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-util" (OuterVolumeSpecName: "util") pod "648c1a76-a342-4f33-b06e-3a7969b0e1bb" (UID: "648c1a76-a342-4f33-b06e-3a7969b0e1bb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.005157 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vlhb\" (UniqueName: \"kubernetes.io/projected/648c1a76-a342-4f33-b06e-3a7969b0e1bb-kube-api-access-4vlhb\") on node \"crc\" DevicePath \"\"" Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.005360 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.005605 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c1a76-a342-4f33-b06e-3a7969b0e1bb-util\") on node \"crc\" DevicePath \"\"" Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.559039 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerID="4a1c882ab126bffc0ea6926b89f0827c4c5c08c6da0df2fa4e10f9103a54f656" exitCode=0 Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.559308 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerDied","Data":"4a1c882ab126bffc0ea6926b89f0827c4c5c08c6da0df2fa4e10f9103a54f656"} Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.563412 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" event={"ID":"648c1a76-a342-4f33-b06e-3a7969b0e1bb","Type":"ContainerDied","Data":"d3e3756305c9f1af4ae6a753ed3238292d90dd98acbea701bdf7d4a4bbf0bcb7"} Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.563451 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e3756305c9f1af4ae6a753ed3238292d90dd98acbea701bdf7d4a4bbf0bcb7" Feb 26 11:23:14 crc kubenswrapper[4724]: I0226 11:23:14.563524 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.571736 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerStarted","Data":"559a9af9e70d0b2ec37bb5e2bc445b59d0014fca04ebf5cffffe1a30eb6b666e"} Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.639831 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zd8ht" podStartSLOduration=2.058873184 podStartE2EDuration="8.639811523s" podCreationTimestamp="2026-02-26 11:23:07 +0000 UTC" firstStartedPulling="2026-02-26 11:23:08.490970497 +0000 UTC m=+1055.146709612" lastFinishedPulling="2026-02-26 11:23:15.071908836 +0000 UTC m=+1061.727647951" observedRunningTime="2026-02-26 11:23:15.639028104 +0000 UTC m=+1062.294767239" watchObservedRunningTime="2026-02-26 11:23:15.639811523 +0000 UTC m=+1062.295550638" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.843020 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx"] Feb 26 11:23:15 crc kubenswrapper[4724]: E0226 11:23:15.843348 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="util" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.843370 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="util" Feb 26 11:23:15 crc kubenswrapper[4724]: E0226 11:23:15.843403 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="pull" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.843412 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="pull" Feb 26 11:23:15 crc kubenswrapper[4724]: E0226 11:23:15.843423 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="extract" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.843431 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="extract" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.843566 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="648c1a76-a342-4f33-b06e-3a7969b0e1bb" containerName="extract" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.844022 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.850464 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.850756 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.851006 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8ckqb" Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.880320 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx"] Feb 26 11:23:15 crc kubenswrapper[4724]: I0226 11:23:15.928889 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxgh\" (UniqueName: \"kubernetes.io/projected/25512be6-334e-4f85-9466-8505e3f3eb51-kube-api-access-vbxgh\") pod \"nmstate-operator-75c5dccd6c-2qxwx\" (UID: \"25512be6-334e-4f85-9466-8505e3f3eb51\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" Feb 26 11:23:16 crc kubenswrapper[4724]: I0226 11:23:16.029641 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxgh\" (UniqueName: \"kubernetes.io/projected/25512be6-334e-4f85-9466-8505e3f3eb51-kube-api-access-vbxgh\") pod \"nmstate-operator-75c5dccd6c-2qxwx\" (UID: \"25512be6-334e-4f85-9466-8505e3f3eb51\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" Feb 26 11:23:16 crc kubenswrapper[4724]: I0226 11:23:16.051233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxgh\" (UniqueName: \"kubernetes.io/projected/25512be6-334e-4f85-9466-8505e3f3eb51-kube-api-access-vbxgh\") pod \"nmstate-operator-75c5dccd6c-2qxwx\" (UID: \"25512be6-334e-4f85-9466-8505e3f3eb51\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" Feb 26 11:23:16 crc kubenswrapper[4724]: I0226 11:23:16.163309 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" Feb 26 11:23:16 crc kubenswrapper[4724]: I0226 11:23:16.663422 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx"] Feb 26 11:23:16 crc kubenswrapper[4724]: W0226 11:23:16.684228 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25512be6_334e_4f85_9466_8505e3f3eb51.slice/crio-4699995fabfbd07661174a25bdf39bbcf0f66bee0404d89a31798df278d07dae WatchSource:0}: Error finding container 4699995fabfbd07661174a25bdf39bbcf0f66bee0404d89a31798df278d07dae: Status 404 returned error can't find the container with id 4699995fabfbd07661174a25bdf39bbcf0f66bee0404d89a31798df278d07dae Feb 26 11:23:17 crc kubenswrapper[4724]: I0226 11:23:17.566593 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:17 crc kubenswrapper[4724]: I0226 11:23:17.566691 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:17 crc kubenswrapper[4724]: I0226 11:23:17.585787 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" event={"ID":"25512be6-334e-4f85-9466-8505e3f3eb51","Type":"ContainerStarted","Data":"4699995fabfbd07661174a25bdf39bbcf0f66bee0404d89a31798df278d07dae"} Feb 26 11:23:18 crc kubenswrapper[4724]: I0226 11:23:18.619161 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zd8ht" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="registry-server" probeResult="failure" output=< Feb 26 11:23:18 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:23:18 crc kubenswrapper[4724]: > Feb 26 11:23:21 crc kubenswrapper[4724]: I0226 11:23:21.164744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" event={"ID":"25512be6-334e-4f85-9466-8505e3f3eb51","Type":"ContainerStarted","Data":"eaedffb16f908fe5b57f9b712410930e41ecb9c44443c55883d183a79ce40a7b"} Feb 26 11:23:21 crc kubenswrapper[4724]: I0226 11:23:21.189658 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2qxwx" podStartSLOduration=2.014250696 podStartE2EDuration="6.189635688s" podCreationTimestamp="2026-02-26 11:23:15 +0000 UTC" firstStartedPulling="2026-02-26 11:23:16.687836911 +0000 UTC m=+1063.343576036" lastFinishedPulling="2026-02-26 11:23:20.863221913 +0000 UTC m=+1067.518961028" observedRunningTime="2026-02-26 11:23:21.187461604 +0000 UTC m=+1067.843200729" watchObservedRunningTime="2026-02-26 11:23:21.189635688 +0000 UTC m=+1067.845374803" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.410916 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7cg6v"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.412627 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.416776 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-4k9lv"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.417825 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.434069 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x8fbb" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.434950 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.435832 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.437427 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.442646 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.454709 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-4k9lv"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.485092 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.485951 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.492773 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qhmts" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.493268 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.493428 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.513680 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.536474 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpgjw\" (UniqueName: \"kubernetes.io/projected/9897fa30-971d-4825-9dea-05da142cc1d1-kube-api-access-jpgjw\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.536521 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.536576 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvcg\" (UniqueName: \"kubernetes.io/projected/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-kube-api-access-4pvcg\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:25 crc kubenswrapper[4724]: 
I0226 11:23:25.536598 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-dbus-socket\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.536618 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqxm\" (UniqueName: \"kubernetes.io/projected/c61316af-28e2-4430-a8d4-058db8a35946-kube-api-access-9pqxm\") pod \"nmstate-metrics-69594cc75-4k9lv\" (UID: \"c61316af-28e2-4430-a8d4-058db8a35946\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.536635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-ovs-socket\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.536665 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-nmstate-lock\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637251 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvcg\" (UniqueName: \"kubernetes.io/projected/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-kube-api-access-4pvcg\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637546 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-dbus-socket\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-ovs-socket\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637593 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pqxm\" (UniqueName: \"kubernetes.io/projected/c61316af-28e2-4430-a8d4-058db8a35946-kube-api-access-9pqxm\") pod \"nmstate-metrics-69594cc75-4k9lv\" (UID: \"c61316af-28e2-4430-a8d4-058db8a35946\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637625 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-nmstate-lock\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 
26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637660 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpgjw\" (UniqueName: \"kubernetes.io/projected/9897fa30-971d-4825-9dea-05da142cc1d1-kube-api-access-jpgjw\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637680 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jswzg\" (UniqueName: \"kubernetes.io/projected/bffbb3a0-67ab-485c-a82c-1acf6925532e-kube-api-access-jswzg\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637700 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637714 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-nmstate-lock\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637673 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-ovs-socket\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637717 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bffbb3a0-67ab-485c-a82c-1acf6925532e-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.637847 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9897fa30-971d-4825-9dea-05da142cc1d1-dbus-socket\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: E0226 11:23:25.637918 4724 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 26 11:23:25 crc kubenswrapper[4724]: E0226 11:23:25.637967 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-tls-key-pair podName:2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb nodeName:}" failed. No retries permitted until 2026-02-26 11:23:26.137949457 +0000 UTC m=+1072.793688572 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-tls-key-pair") pod "nmstate-webhook-786f45cff4-r6xm9" (UID: "2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb") : secret "openshift-nmstate-webhook" not found Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.638064 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bffbb3a0-67ab-485c-a82c-1acf6925532e-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.664043 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpgjw\" (UniqueName: \"kubernetes.io/projected/9897fa30-971d-4825-9dea-05da142cc1d1-kube-api-access-jpgjw\") pod \"nmstate-handler-7cg6v\" (UID: \"9897fa30-971d-4825-9dea-05da142cc1d1\") " pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.677722 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pqxm\" (UniqueName: \"kubernetes.io/projected/c61316af-28e2-4430-a8d4-058db8a35946-kube-api-access-9pqxm\") pod \"nmstate-metrics-69594cc75-4k9lv\" (UID: \"c61316af-28e2-4430-a8d4-058db8a35946\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.686492 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvcg\" (UniqueName: \"kubernetes.io/projected/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-kube-api-access-4pvcg\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.700125 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86c5ff6f96-rvhp2"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.700815 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.745485 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bffbb3a0-67ab-485c-a82c-1acf6925532e-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.745569 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bffbb3a0-67ab-485c-a82c-1acf6925532e-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.745643 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jswzg\" (UniqueName: \"kubernetes.io/projected/bffbb3a0-67ab-485c-a82c-1acf6925532e-kube-api-access-jswzg\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.746199 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.747231 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bffbb3a0-67ab-485c-a82c-1acf6925532e-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.772549 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bffbb3a0-67ab-485c-a82c-1acf6925532e-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.772899 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.773692 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86c5ff6f96-rvhp2"] Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.780847 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jswzg\" (UniqueName: \"kubernetes.io/projected/bffbb3a0-67ab-485c-a82c-1acf6925532e-kube-api-access-jswzg\") pod \"nmstate-console-plugin-5dcbbd79cf-g97p9\" (UID: \"bffbb3a0-67ab-485c-a82c-1acf6925532e\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.817565 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.846953 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fc5eb33-d445-46a1-8892-d3101f845f19-console-oauth-config\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.847022 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc5eb33-d445-46a1-8892-d3101f845f19-console-serving-cert\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.847072 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtmj\" (UniqueName: \"kubernetes.io/projected/7fc5eb33-d445-46a1-8892-d3101f845f19-kube-api-access-6jtmj\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.847193 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-console-config\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.847227 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-trusted-ca-bundle\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.847253 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-oauth-serving-cert\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.847389 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-service-ca\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948267 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-console-config\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948322 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-trusted-ca-bundle\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948356 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-oauth-serving-cert\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948386 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-service-ca\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948445 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fc5eb33-d445-46a1-8892-d3101f845f19-console-oauth-config\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948476 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc5eb33-d445-46a1-8892-d3101f845f19-console-serving-cert\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.948512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jtmj\" (UniqueName: \"kubernetes.io/projected/7fc5eb33-d445-46a1-8892-d3101f845f19-kube-api-access-6jtmj\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.950566 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-console-config\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.951289 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-service-ca\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.952026 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-oauth-serving-cert\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.952238 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7fc5eb33-d445-46a1-8892-d3101f845f19-trusted-ca-bundle\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.954814 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fc5eb33-d445-46a1-8892-d3101f845f19-console-oauth-config\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.966480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc5eb33-d445-46a1-8892-d3101f845f19-console-serving-cert\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:25 crc kubenswrapper[4724]: I0226 11:23:25.967053 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jtmj\" (UniqueName: \"kubernetes.io/projected/7fc5eb33-d445-46a1-8892-d3101f845f19-kube-api-access-6jtmj\") pod \"console-86c5ff6f96-rvhp2\" (UID: \"7fc5eb33-d445-46a1-8892-d3101f845f19\") " pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.033536 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.170914 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.189139 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-r6xm9\" (UID: \"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.213893 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7cg6v" event={"ID":"9897fa30-971d-4825-9dea-05da142cc1d1","Type":"ContainerStarted","Data":"a14b9a8d119b916cab416f8c5398bf5ba5c879221a651129d7c46c3a3f32ca99"} Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.315103 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9"] Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.347578 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-4k9lv"] Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.382370 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.599097 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9"] Feb 26 11:23:26 crc kubenswrapper[4724]: I0226 11:23:26.639863 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86c5ff6f96-rvhp2"] Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.224619 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" event={"ID":"bffbb3a0-67ab-485c-a82c-1acf6925532e","Type":"ContainerStarted","Data":"86300321b3bb2d0d580ddc8e402f5b6a24d3b103306e44c4077b28e502570c96"} Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.225864 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" event={"ID":"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb","Type":"ContainerStarted","Data":"e1bb94e28b6bad32e01a500a3e5d08783c6173d36c56f259e159b7046d4b5e42"} Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.228067 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86c5ff6f96-rvhp2" event={"ID":"7fc5eb33-d445-46a1-8892-d3101f845f19","Type":"ContainerStarted","Data":"564adb7ccd4ecb4ca177f8f4434613920dda5daa9a450e3929e0f4b3efa8e6f0"} Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.228159 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86c5ff6f96-rvhp2" event={"ID":"7fc5eb33-d445-46a1-8892-d3101f845f19","Type":"ContainerStarted","Data":"bb79bc0710c20f4acbd0489d20223e807c9176fc86fc07a753cdd78ca3a611e8"} Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.228977 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" event={"ID":"c61316af-28e2-4430-a8d4-058db8a35946","Type":"ContainerStarted","Data":"b6dbc42414f0f58098451b9dc3f8863976009902b129abd5ca1693a963e89d1a"} Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.246993 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86c5ff6f96-rvhp2" podStartSLOduration=2.246973439 podStartE2EDuration="2.246973439s" podCreationTimestamp="2026-02-26 11:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:23:27.243923543 +0000 UTC m=+1073.899662678" watchObservedRunningTime="2026-02-26 11:23:27.246973439 +0000 UTC m=+1073.902712554" Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.614220 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.658439 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:27 crc kubenswrapper[4724]: I0226 11:23:27.844158 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zd8ht"] Feb 26 11:23:29 crc kubenswrapper[4724]: I0226 11:23:29.242462 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zd8ht" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="registry-server" containerID="cri-o://559a9af9e70d0b2ec37bb5e2bc445b59d0014fca04ebf5cffffe1a30eb6b666e" gracePeriod=2 Feb 26 11:23:30 
crc kubenswrapper[4724]: I0226 11:23:30.314895 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerID="559a9af9e70d0b2ec37bb5e2bc445b59d0014fca04ebf5cffffe1a30eb6b666e" exitCode=0 Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.314990 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerDied","Data":"559a9af9e70d0b2ec37bb5e2bc445b59d0014fca04ebf5cffffe1a30eb6b666e"} Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.821015 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.909350 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7tsv\" (UniqueName: \"kubernetes.io/projected/3cc101ce-cc9a-495a-8c52-1e16f32ab574-kube-api-access-m7tsv\") pod \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.910044 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-utilities\") pod \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.910279 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-catalog-content\") pod \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\" (UID: \"3cc101ce-cc9a-495a-8c52-1e16f32ab574\") " Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.911755 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-utilities" (OuterVolumeSpecName: "utilities") pod "3cc101ce-cc9a-495a-8c52-1e16f32ab574" (UID: "3cc101ce-cc9a-495a-8c52-1e16f32ab574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:23:30 crc kubenswrapper[4724]: I0226 11:23:30.914748 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc101ce-cc9a-495a-8c52-1e16f32ab574-kube-api-access-m7tsv" (OuterVolumeSpecName: "kube-api-access-m7tsv") pod "3cc101ce-cc9a-495a-8c52-1e16f32ab574" (UID: "3cc101ce-cc9a-495a-8c52-1e16f32ab574"). InnerVolumeSpecName "kube-api-access-m7tsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.012013 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7tsv\" (UniqueName: \"kubernetes.io/projected/3cc101ce-cc9a-495a-8c52-1e16f32ab574-kube-api-access-m7tsv\") on node \"crc\" DevicePath \"\"" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.012045 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.072510 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cc101ce-cc9a-495a-8c52-1e16f32ab574" (UID: "3cc101ce-cc9a-495a-8c52-1e16f32ab574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.113818 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc101ce-cc9a-495a-8c52-1e16f32ab574-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.322992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7cg6v" event={"ID":"9897fa30-971d-4825-9dea-05da142cc1d1","Type":"ContainerStarted","Data":"95ab1e247d457f9cdf2ee8ee7551719b947c899a27416886a61b94c9bc0447fc"} Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.324744 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.324772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" event={"ID":"c61316af-28e2-4430-a8d4-058db8a35946","Type":"ContainerStarted","Data":"7a935a9d286fb0f1f24ac689246958fff982f771def43b07ef3493856a2d9272"} Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.327840 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" event={"ID":"bffbb3a0-67ab-485c-a82c-1acf6925532e","Type":"ContainerStarted","Data":"a4fb899436987ea02c8d1fa6782f835df901fb709fc52451954433d2a7b7a652"} Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.331756 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd8ht" event={"ID":"3cc101ce-cc9a-495a-8c52-1e16f32ab574","Type":"ContainerDied","Data":"9ca44ae101b91b9b7b48eb7c171404e978d32c812e2b19d6287d7e6a257f4c57"} Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.331809 4724 scope.go:117] "RemoveContainer" containerID="559a9af9e70d0b2ec37bb5e2bc445b59d0014fca04ebf5cffffe1a30eb6b666e" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.331934 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zd8ht" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.341125 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7cg6v" podStartSLOduration=1.230861969 podStartE2EDuration="6.341108458s" podCreationTimestamp="2026-02-26 11:23:25 +0000 UTC" firstStartedPulling="2026-02-26 11:23:25.788737339 +0000 UTC m=+1072.444476454" lastFinishedPulling="2026-02-26 11:23:30.898983828 +0000 UTC m=+1077.554722943" observedRunningTime="2026-02-26 11:23:31.339457427 +0000 UTC m=+1077.995196562" watchObservedRunningTime="2026-02-26 11:23:31.341108458 +0000 UTC m=+1077.996847573" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.353323 4724 scope.go:117] "RemoveContainer" containerID="4a1c882ab126bffc0ea6926b89f0827c4c5c08c6da0df2fa4e10f9103a54f656" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.361833 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-g97p9" podStartSLOduration=1.793085464 podStartE2EDuration="6.361813496s" podCreationTimestamp="2026-02-26 11:23:25 +0000 UTC" firstStartedPulling="2026-02-26 11:23:26.320337938 +0000 UTC m=+1072.976077053" lastFinishedPulling="2026-02-26 11:23:30.88906596 +0000 UTC m=+1077.544805085" observedRunningTime="2026-02-26 11:23:31.360193176 +0000 UTC m=+1078.015932291" watchObservedRunningTime="2026-02-26 11:23:31.361813496 +0000 UTC m=+1078.017552601" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.393842 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zd8ht"] Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.412306 4724 scope.go:117] "RemoveContainer" containerID="31c1c6b71f4000930753cf9133e71e3a11819648713ce948251d9d21b8e6e512" Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.416512 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zd8ht"] Feb 26 11:23:31 crc kubenswrapper[4724]: I0226 11:23:31.984876 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" path="/var/lib/kubelet/pods/3cc101ce-cc9a-495a-8c52-1e16f32ab574/volumes" Feb 26 11:23:34 crc kubenswrapper[4724]: I0226 11:23:34.359943 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" event={"ID":"c61316af-28e2-4430-a8d4-058db8a35946","Type":"ContainerStarted","Data":"177f151695fcc63e45260941d9ce6b6d7b3abe7f32727bfbc460107a40c1cdbc"} Feb 26 11:23:34 crc kubenswrapper[4724]: I0226 11:23:34.388377 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-4k9lv" podStartSLOduration=1.961310653 podStartE2EDuration="9.388350779s" podCreationTimestamp="2026-02-26 11:23:25 +0000 UTC" firstStartedPulling="2026-02-26 11:23:26.358666217 +0000 UTC m=+1073.014405332" lastFinishedPulling="2026-02-26 11:23:33.785706343 +0000 UTC m=+1080.441445458" observedRunningTime="2026-02-26 11:23:34.383707382 +0000 UTC m=+1081.039446497" watchObservedRunningTime="2026-02-26 11:23:34.388350779 +0000 UTC m=+1081.044089894" Feb 26 11:23:36 crc kubenswrapper[4724]: I0226 11:23:36.034895 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:36 crc kubenswrapper[4724]: I0226 11:23:36.035475 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:36 crc kubenswrapper[4724]: I0226 11:23:36.039709 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:36 crc kubenswrapper[4724]: I0226 11:23:36.375431 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86c5ff6f96-rvhp2" Feb 26 11:23:36 crc kubenswrapper[4724]: I0226 11:23:36.447259 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9cwcb"] Feb 26 11:23:40 crc kubenswrapper[4724]: I0226 11:23:40.403554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" event={"ID":"2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb","Type":"ContainerStarted","Data":"7b832aa1becd8ce2a28b981fd5afbab99bacd02ed03be42446e7a5062abcbd76"} Feb 26 11:23:40 crc kubenswrapper[4724]: I0226 11:23:40.404245 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:23:40 crc kubenswrapper[4724]: I0226 11:23:40.421682 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" podStartSLOduration=1.851547977 podStartE2EDuration="15.421663588s" podCreationTimestamp="2026-02-26 11:23:25 +0000 UTC" firstStartedPulling="2026-02-26 11:23:26.612687242 +0000 UTC m=+1073.268426357" lastFinishedPulling="2026-02-26 11:23:40.182802853 +0000 UTC m=+1086.838541968" observedRunningTime="2026-02-26 11:23:40.41975608 +0000 UTC m=+1087.075495195" watchObservedRunningTime="2026-02-26 11:23:40.421663588 +0000 UTC m=+1087.077402703" Feb 26 11:23:40 crc kubenswrapper[4724]: I0226 11:23:40.769962 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7cg6v" Feb 26 11:23:46 crc kubenswrapper[4724]: I0226 11:23:46.906247 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:23:46 crc kubenswrapper[4724]: I0226 11:23:46.906645 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:23:56 crc kubenswrapper[4724]: I0226 11:23:56.392214 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-r6xm9" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.132786 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535084-c5fwq"] Feb 26 11:24:00 crc kubenswrapper[4724]: E0226 11:24:00.133912 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="registry-server" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.133931 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="registry-server" Feb 26 11:24:00 crc kubenswrapper[4724]: E0226 11:24:00.133955 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="extract-utilities" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.133966 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="extract-utilities" Feb 26 11:24:00 crc kubenswrapper[4724]: E0226 11:24:00.133983 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="extract-content" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.133991 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="extract-content" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.134111 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc101ce-cc9a-495a-8c52-1e16f32ab574" containerName="registry-server" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.134662 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.138855 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.138868 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.140237 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.151476 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-c5fwq"] Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.306625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n8fk\" (UniqueName: \"kubernetes.io/projected/893f427b-5554-4b22-82de-204e5893f5e3-kube-api-access-2n8fk\") pod \"auto-csr-approver-29535084-c5fwq\" (UID: \"893f427b-5554-4b22-82de-204e5893f5e3\") " pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.407602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n8fk\" (UniqueName: \"kubernetes.io/projected/893f427b-5554-4b22-82de-204e5893f5e3-kube-api-access-2n8fk\") pod \"auto-csr-approver-29535084-c5fwq\" (UID: \"893f427b-5554-4b22-82de-204e5893f5e3\") " pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.430112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n8fk\" (UniqueName: \"kubernetes.io/projected/893f427b-5554-4b22-82de-204e5893f5e3-kube-api-access-2n8fk\") pod \"auto-csr-approver-29535084-c5fwq\" (UID: \"893f427b-5554-4b22-82de-204e5893f5e3\") " pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.457776 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:00 crc kubenswrapper[4724]: I0226 11:24:00.893965 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-c5fwq"] Feb 26 11:24:00 crc kubenswrapper[4724]: W0226 11:24:00.895802 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod893f427b_5554_4b22_82de_204e5893f5e3.slice/crio-2961feff843996b7ecada9b435dadd168b7ef9d4c2a5c5ab629a648cb4a417b6 WatchSource:0}: Error finding container 2961feff843996b7ecada9b435dadd168b7ef9d4c2a5c5ab629a648cb4a417b6: Status 404 returned error can't find the container with id 2961feff843996b7ecada9b435dadd168b7ef9d4c2a5c5ab629a648cb4a417b6 Feb 26 11:24:01 crc kubenswrapper[4724]: I0226 11:24:01.494189 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9cwcb" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerName="console" containerID="cri-o://0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a" gracePeriod=15 Feb 26 11:24:01 crc kubenswrapper[4724]: I0226 11:24:01.540902 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" event={"ID":"893f427b-5554-4b22-82de-204e5893f5e3","Type":"ContainerStarted","Data":"2961feff843996b7ecada9b435dadd168b7ef9d4c2a5c5ab629a648cb4a417b6"} Feb 26 11:24:01 crc kubenswrapper[4724]: I0226 11:24:01.844465 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9cwcb_0308748d-e26a-4fc4-bc5d-d3bd65936c7b/console/0.log" Feb 26 11:24:01 crc kubenswrapper[4724]: I0226 11:24:01.844542 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042136 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-config\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042235 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-oauth-config\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042324 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-trusted-ca-bundle\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042374 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdxq7\" (UniqueName: \"kubernetes.io/projected/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-kube-api-access-fdxq7\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042400 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-service-ca\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042421 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-oauth-serving-cert\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.042451 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-serving-cert\") pod \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\" (UID: \"0308748d-e26a-4fc4-bc5d-d3bd65936c7b\") " Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.043445 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.043471 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-service-ca" (OuterVolumeSpecName: "service-ca") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.043499 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-config" (OuterVolumeSpecName: "console-config") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.044490 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.050964 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-kube-api-access-fdxq7" (OuterVolumeSpecName: "kube-api-access-fdxq7") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "kube-api-access-fdxq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.053566 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.060891 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0308748d-e26a-4fc4-bc5d-d3bd65936c7b" (UID: "0308748d-e26a-4fc4-bc5d-d3bd65936c7b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144305 4724 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144348 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdxq7\" (UniqueName: \"kubernetes.io/projected/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-kube-api-access-fdxq7\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144368 4724 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144382 4724 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144395 4724 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144407 4724 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.144417 4724 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0308748d-e26a-4fc4-bc5d-d3bd65936c7b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.552685 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9cwcb_0308748d-e26a-4fc4-bc5d-d3bd65936c7b/console/0.log" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.553037 4724 generic.go:334] "Generic (PLEG): container finished" podID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerID="0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a" exitCode=2 Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.553108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9cwcb" event={"ID":"0308748d-e26a-4fc4-bc5d-d3bd65936c7b","Type":"ContainerDied","Data":"0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a"} Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.553150 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9cwcb" event={"ID":"0308748d-e26a-4fc4-bc5d-d3bd65936c7b","Type":"ContainerDied","Data":"1af3954d1c49fe708ed8fb411484e13fafcfff104a198b77ca7b71ccb26dc59d"} Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.553229 4724 scope.go:117] "RemoveContainer" containerID="0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.553421 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9cwcb" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.557040 4724 generic.go:334] "Generic (PLEG): container finished" podID="893f427b-5554-4b22-82de-204e5893f5e3" containerID="f48c06d1186a9899ea8e900f222701a491e712f8120851cf9be36495c3f544c3" exitCode=0 Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.557091 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" event={"ID":"893f427b-5554-4b22-82de-204e5893f5e3","Type":"ContainerDied","Data":"f48c06d1186a9899ea8e900f222701a491e712f8120851cf9be36495c3f544c3"} Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.576357 4724 scope.go:117] "RemoveContainer" containerID="0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a" Feb 26 11:24:02 crc kubenswrapper[4724]: E0226 11:24:02.577326 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a\": container with ID starting with 0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a not found: ID does not exist" containerID="0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.577363 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a"} err="failed to get container status \"0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a\": rpc error: code = NotFound desc = could not find container \"0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a\": container with ID starting with 0fc75f659e460f371bc3c51d45ba27c80fd4d2290b17204eb936522c20b0b83a not found: ID does not exist" Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.597547 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9cwcb"] Feb 26 11:24:02 crc kubenswrapper[4724]: I0226 11:24:02.612081 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9cwcb"] Feb 26 11:24:03 crc kubenswrapper[4724]: I0226 11:24:03.910099 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:03 crc kubenswrapper[4724]: I0226 11:24:03.974390 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n8fk\" (UniqueName: \"kubernetes.io/projected/893f427b-5554-4b22-82de-204e5893f5e3-kube-api-access-2n8fk\") pod \"893f427b-5554-4b22-82de-204e5893f5e3\" (UID: \"893f427b-5554-4b22-82de-204e5893f5e3\") " Feb 26 11:24:03 crc kubenswrapper[4724]: I0226 11:24:03.996063 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893f427b-5554-4b22-82de-204e5893f5e3-kube-api-access-2n8fk" (OuterVolumeSpecName: "kube-api-access-2n8fk") pod "893f427b-5554-4b22-82de-204e5893f5e3" (UID: "893f427b-5554-4b22-82de-204e5893f5e3"). InnerVolumeSpecName "kube-api-access-2n8fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.001804 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" path="/var/lib/kubelet/pods/0308748d-e26a-4fc4-bc5d-d3bd65936c7b/volumes" Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.076003 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n8fk\" (UniqueName: \"kubernetes.io/projected/893f427b-5554-4b22-82de-204e5893f5e3-kube-api-access-2n8fk\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.587727 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" event={"ID":"893f427b-5554-4b22-82de-204e5893f5e3","Type":"ContainerDied","Data":"2961feff843996b7ecada9b435dadd168b7ef9d4c2a5c5ab629a648cb4a417b6"} Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.588092 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2961feff843996b7ecada9b435dadd168b7ef9d4c2a5c5ab629a648cb4a417b6" Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.588110 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535084-c5fwq" Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.978458 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-c5qd4"] Feb 26 11:24:04 crc kubenswrapper[4724]: I0226 11:24:04.983032 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535078-c5qd4"] Feb 26 11:24:05 crc kubenswrapper[4724]: I0226 11:24:05.987729 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8abdce5a-e575-4855-8bc2-8fe66527b99b" path="/var/lib/kubelet/pods/8abdce5a-e575-4855-8bc2-8fe66527b99b/volumes" Feb 26 11:24:06 crc kubenswrapper[4724]: I0226 11:24:06.022508 4724 scope.go:117] "RemoveContainer" containerID="82b24805739d3def8d9f13587b0e5bca452b03f9f63d072b99854e4721dd70aa" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.679054 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p"] Feb 26 11:24:11 crc kubenswrapper[4724]: E0226 11:24:11.679903 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerName="console" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.679917 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerName="console" Feb 26 11:24:11 crc kubenswrapper[4724]: E0226 11:24:11.679940 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f427b-5554-4b22-82de-204e5893f5e3" containerName="oc" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.679947 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f427b-5554-4b22-82de-204e5893f5e3" containerName="oc" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.680087 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f427b-5554-4b22-82de-204e5893f5e3" containerName="oc" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.680100 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0308748d-e26a-4fc4-bc5d-d3bd65936c7b" containerName="console" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.681166 4724 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.683490 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.695852 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p"] Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.878206 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.878555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.878695 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56wd\" (UniqueName: \"kubernetes.io/projected/55df853e-3e28-4871-8b98-ac9bc1a02cbf-kube-api-access-n56wd\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.979967 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56wd\" (UniqueName: \"kubernetes.io/projected/55df853e-3e28-4871-8b98-ac9bc1a02cbf-kube-api-access-n56wd\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.980072 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.980099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.980605 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:11 crc kubenswrapper[4724]: I0226 11:24:11.980662 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:12 crc kubenswrapper[4724]: I0226 11:24:12.002372 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56wd\" (UniqueName: \"kubernetes.io/projected/55df853e-3e28-4871-8b98-ac9bc1a02cbf-kube-api-access-n56wd\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:12 crc kubenswrapper[4724]: I0226 11:24:12.298564 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:12 crc kubenswrapper[4724]: I0226 11:24:12.541893 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p"] Feb 26 11:24:12 crc kubenswrapper[4724]: I0226 11:24:12.655387 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" event={"ID":"55df853e-3e28-4871-8b98-ac9bc1a02cbf","Type":"ContainerStarted","Data":"f1b45dce2664ac5877cf070d66c19b4f442b84f23883cfeccd36d1d0ed489a8d"} Feb 26 11:24:13 crc kubenswrapper[4724]: I0226 11:24:13.665199 4724 generic.go:334] "Generic (PLEG): container finished" podID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerID="fa9b7da40332d2fa1a469b1fb3b13f9031ed4c253c35b737434f56202902d152" exitCode=0 Feb 26 11:24:13 crc kubenswrapper[4724]: I0226 11:24:13.665293 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" event={"ID":"55df853e-3e28-4871-8b98-ac9bc1a02cbf","Type":"ContainerDied","Data":"fa9b7da40332d2fa1a469b1fb3b13f9031ed4c253c35b737434f56202902d152"} Feb 26 11:24:16 crc kubenswrapper[4724]: I0226 11:24:16.684409 4724 generic.go:334] "Generic (PLEG): container finished" podID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerID="616df0ca88f39607c0f4272092df12f3cd2d91614bcd0ebcd5eba8abb3c6c752" exitCode=0 Feb 26 11:24:16 crc kubenswrapper[4724]: I0226 11:24:16.684513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" event={"ID":"55df853e-3e28-4871-8b98-ac9bc1a02cbf","Type":"ContainerDied","Data":"616df0ca88f39607c0f4272092df12f3cd2d91614bcd0ebcd5eba8abb3c6c752"} Feb 26 11:24:16 crc kubenswrapper[4724]: I0226 11:24:16.905997 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 26 11:24:16 crc kubenswrapper[4724]: I0226 11:24:16.906300 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:24:17 crc kubenswrapper[4724]: I0226 11:24:17.694728 4724 generic.go:334] "Generic (PLEG): container finished" podID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerID="8d1f9a5048a63ff4e1a9448152b5c6eabc043c8c6dc213a9d5db73e51cedbba2" exitCode=0 Feb 26 11:24:17 crc kubenswrapper[4724]: I0226 11:24:17.694780 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" event={"ID":"55df853e-3e28-4871-8b98-ac9bc1a02cbf","Type":"ContainerDied","Data":"8d1f9a5048a63ff4e1a9448152b5c6eabc043c8c6dc213a9d5db73e51cedbba2"} Feb 26 11:24:18 crc kubenswrapper[4724]: I0226 11:24:18.916010 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.072151 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-util\") pod \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.072328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-bundle\") pod \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.072402 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n56wd\" (UniqueName: \"kubernetes.io/projected/55df853e-3e28-4871-8b98-ac9bc1a02cbf-kube-api-access-n56wd\") pod \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\" (UID: \"55df853e-3e28-4871-8b98-ac9bc1a02cbf\") " Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.073798 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-bundle" (OuterVolumeSpecName: "bundle") pod "55df853e-3e28-4871-8b98-ac9bc1a02cbf" (UID: "55df853e-3e28-4871-8b98-ac9bc1a02cbf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.078357 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55df853e-3e28-4871-8b98-ac9bc1a02cbf-kube-api-access-n56wd" (OuterVolumeSpecName: "kube-api-access-n56wd") pod "55df853e-3e28-4871-8b98-ac9bc1a02cbf" (UID: "55df853e-3e28-4871-8b98-ac9bc1a02cbf"). InnerVolumeSpecName "kube-api-access-n56wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.092818 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-util" (OuterVolumeSpecName: "util") pod "55df853e-3e28-4871-8b98-ac9bc1a02cbf" (UID: "55df853e-3e28-4871-8b98-ac9bc1a02cbf"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.174776 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.174828 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n56wd\" (UniqueName: \"kubernetes.io/projected/55df853e-3e28-4871-8b98-ac9bc1a02cbf-kube-api-access-n56wd\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.174839 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55df853e-3e28-4871-8b98-ac9bc1a02cbf-util\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.716206 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" event={"ID":"55df853e-3e28-4871-8b98-ac9bc1a02cbf","Type":"ContainerDied","Data":"f1b45dce2664ac5877cf070d66c19b4f442b84f23883cfeccd36d1d0ed489a8d"} Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.716264 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b45dce2664ac5877cf070d66c19b4f442b84f23883cfeccd36d1d0ed489a8d" Feb 26 11:24:19 crc kubenswrapper[4724]: I0226 11:24:19.716371 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.757206 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs"] Feb 26 11:24:29 crc kubenswrapper[4724]: E0226 11:24:29.758130 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="util" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.758149 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="util" Feb 26 11:24:29 crc kubenswrapper[4724]: E0226 11:24:29.758196 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="pull" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.758205 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="pull" Feb 26 11:24:29 crc kubenswrapper[4724]: E0226 11:24:29.758217 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="extract" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.758225 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="extract" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.758359 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="55df853e-3e28-4871-8b98-ac9bc1a02cbf" containerName="extract" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.758866 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.763552 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.764250 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.765097 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pv9x9" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.767271 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.771378 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.790150 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs"] Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.824996 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-webhook-cert\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.825096 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-apiservice-cert\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.825125 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brr7s\" (UniqueName: \"kubernetes.io/projected/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-kube-api-access-brr7s\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.926594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-webhook-cert\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.926733 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-apiservice-cert\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.926763 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brr7s\" (UniqueName: \"kubernetes.io/projected/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-kube-api-access-brr7s\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.933265 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-webhook-cert\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:29 crc kubenswrapper[4724]: I0226 11:24:29.934100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-apiservice-cert\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.015062 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brr7s\" (UniqueName: \"kubernetes.io/projected/c749ff83-c2b1-49fc-b99a-1a8f7bda31fa-kube-api-access-brr7s\") pod \"metallb-operator-controller-manager-64754968d5-4ktxs\" (UID: \"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa\") " pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.079920 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.297608 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2"] Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.298640 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.302635 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sqs4b" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.302883 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.303000 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.330851 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c07d2449-8a13-4ae2-832b-30904057f00c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.330942 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c07d2449-8a13-4ae2-832b-30904057f00c-webhook-cert\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.331068 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825xh\" (UniqueName: \"kubernetes.io/projected/c07d2449-8a13-4ae2-832b-30904057f00c-kube-api-access-825xh\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.350409 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2"] Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.432871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825xh\" (UniqueName: \"kubernetes.io/projected/c07d2449-8a13-4ae2-832b-30904057f00c-kube-api-access-825xh\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.432947 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c07d2449-8a13-4ae2-832b-30904057f00c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.432996 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c07d2449-8a13-4ae2-832b-30904057f00c-webhook-cert\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 
11:24:30.441728 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c07d2449-8a13-4ae2-832b-30904057f00c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.442675 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c07d2449-8a13-4ae2-832b-30904057f00c-webhook-cert\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.456165 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825xh\" (UniqueName: \"kubernetes.io/projected/c07d2449-8a13-4ae2-832b-30904057f00c-kube-api-access-825xh\") pod \"metallb-operator-webhook-server-6c46bf8994-k9qf2\" (UID: \"c07d2449-8a13-4ae2-832b-30904057f00c\") " pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.533519 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs"] Feb 26 11:24:30 crc kubenswrapper[4724]: W0226 11:24:30.544397 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc749ff83_c2b1_49fc_b99a_1a8f7bda31fa.slice/crio-dcfdf693002fc1bcdf52c152b2dcae1cf3182342b895ddf4329d1526648cfd90 WatchSource:0}: Error finding container dcfdf693002fc1bcdf52c152b2dcae1cf3182342b895ddf4329d1526648cfd90: Status 404 returned error can't find the container with id dcfdf693002fc1bcdf52c152b2dcae1cf3182342b895ddf4329d1526648cfd90 Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.649786 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:30 crc kubenswrapper[4724]: I0226 11:24:30.782561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" event={"ID":"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa","Type":"ContainerStarted","Data":"dcfdf693002fc1bcdf52c152b2dcae1cf3182342b895ddf4329d1526648cfd90"} Feb 26 11:24:31 crc kubenswrapper[4724]: I0226 11:24:31.051238 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2"] Feb 26 11:24:31 crc kubenswrapper[4724]: W0226 11:24:31.060690 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc07d2449_8a13_4ae2_832b_30904057f00c.slice/crio-a559d6c99d541bace0116fdb6e666924ed64f8f76577367b4adf6054092e9c65 WatchSource:0}: Error finding container a559d6c99d541bace0116fdb6e666924ed64f8f76577367b4adf6054092e9c65: Status 404 returned error can't find the container with id a559d6c99d541bace0116fdb6e666924ed64f8f76577367b4adf6054092e9c65 Feb 26 11:24:31 crc kubenswrapper[4724]: I0226 11:24:31.789957 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" event={"ID":"c07d2449-8a13-4ae2-832b-30904057f00c","Type":"ContainerStarted","Data":"a559d6c99d541bace0116fdb6e666924ed64f8f76577367b4adf6054092e9c65"} Feb 26 11:24:35 crc kubenswrapper[4724]: I0226 11:24:35.825459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" event={"ID":"c749ff83-c2b1-49fc-b99a-1a8f7bda31fa","Type":"ContainerStarted","Data":"dd0e02379fc4fd2646bbcb0a48cfdfb9a9fc47dfd6844f215fea5e3d945b6349"} Feb 26 11:24:35 crc kubenswrapper[4724]: I0226 11:24:35.826338 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:24:35 crc kubenswrapper[4724]: I0226 11:24:35.854025 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" podStartSLOduration=2.103727234 podStartE2EDuration="6.853999819s" podCreationTimestamp="2026-02-26 11:24:29 +0000 UTC" firstStartedPulling="2026-02-26 11:24:30.546826329 +0000 UTC m=+1137.202565434" lastFinishedPulling="2026-02-26 11:24:35.297098894 +0000 UTC m=+1141.952838019" observedRunningTime="2026-02-26 11:24:35.850374456 +0000 UTC m=+1142.506113581" watchObservedRunningTime="2026-02-26 11:24:35.853999819 +0000 UTC m=+1142.509738954" Feb 26 11:24:41 crc kubenswrapper[4724]: I0226 11:24:41.882974 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" event={"ID":"c07d2449-8a13-4ae2-832b-30904057f00c","Type":"ContainerStarted","Data":"39fd2e7edc9f2f8fed869116951e59871295fc085d0b742bcb6908cf88cc4167"} Feb 26 11:24:41 crc kubenswrapper[4724]: I0226 11:24:41.883571 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.622951 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" podStartSLOduration=3.879061413 podStartE2EDuration="13.622919346s" 
podCreationTimestamp="2026-02-26 11:24:30 +0000 UTC" firstStartedPulling="2026-02-26 11:24:31.065023098 +0000 UTC m=+1137.720762213" lastFinishedPulling="2026-02-26 11:24:40.808881031 +0000 UTC m=+1147.464620146" observedRunningTime="2026-02-26 11:24:41.917078087 +0000 UTC m=+1148.572817212" watchObservedRunningTime="2026-02-26 11:24:43.622919346 +0000 UTC m=+1150.278658471" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.625226 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6v5x"] Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.626828 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.640963 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6v5x"] Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.702105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-catalog-content\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.702680 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8ftx\" (UniqueName: \"kubernetes.io/projected/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-kube-api-access-v8ftx\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.702757 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-utilities\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.803330 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-catalog-content\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.803390 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8ftx\" (UniqueName: \"kubernetes.io/projected/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-kube-api-access-v8ftx\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.803433 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-utilities\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.804198 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-utilities\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.804711 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-catalog-content\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.833963 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8ftx\" (UniqueName: \"kubernetes.io/projected/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-kube-api-access-v8ftx\") pod \"certified-operators-x6v5x\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:43 crc kubenswrapper[4724]: I0226 11:24:43.950925 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:44 crc kubenswrapper[4724]: I0226 11:24:44.301010 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6v5x"] Feb 26 11:24:44 crc kubenswrapper[4724]: I0226 11:24:44.906279 4724 generic.go:334] "Generic (PLEG): container finished" podID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerID="1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8" exitCode=0 Feb 26 11:24:44 crc kubenswrapper[4724]: I0226 11:24:44.906525 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerDied","Data":"1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8"} Feb 26 11:24:44 crc kubenswrapper[4724]: I0226 11:24:44.906635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerStarted","Data":"03fa4627838e9954c3dc89ac145a2890e65f339462c5e69263c799252a84a7c1"} Feb 26 11:24:45 crc kubenswrapper[4724]: I0226 11:24:45.917832 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerStarted","Data":"d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e"} Feb 26 11:24:47 crc kubenswrapper[4724]: I0226 11:24:47.063969 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:24:47 crc kubenswrapper[4724]: I0226 11:24:47.064020 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:24:47 crc kubenswrapper[4724]: I0226 11:24:47.064126 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 
26 11:24:47 crc kubenswrapper[4724]: I0226 11:24:47.084380 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:24:47 crc kubenswrapper[4724]: I0226 11:24:47.084487 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce" gracePeriod=600 Feb 26 11:24:47 crc kubenswrapper[4724]: E0226 11:24:47.246384 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2405c92_e87c_4e60_ac28_0cd51800d9df.slice/crio-9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2405c92_e87c_4e60_ac28_0cd51800d9df.slice/crio-conmon-9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:24:48 crc kubenswrapper[4724]: I0226 11:24:48.085586 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce" exitCode=0 Feb 26 11:24:48 crc kubenswrapper[4724]: I0226 11:24:48.085631 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce"} Feb 26 11:24:48 crc kubenswrapper[4724]: I0226 11:24:48.085979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"89545c6222687528337cf32ba9bda30e19443137c7e0933c297f827f49d03a36"} Feb 26 11:24:48 crc kubenswrapper[4724]: I0226 11:24:48.086011 4724 scope.go:117] "RemoveContainer" containerID="4ea38ea9f17bd357f830c4f1610289188452d159aa12b5f949dbdd14483c4545" Feb 26 11:24:48 crc kubenswrapper[4724]: I0226 11:24:48.092820 4724 generic.go:334] "Generic (PLEG): container finished" podID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerID="d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e" exitCode=0 Feb 26 11:24:48 crc kubenswrapper[4724]: I0226 11:24:48.092859 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerDied","Data":"d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e"} Feb 26 11:24:50 crc kubenswrapper[4724]: I0226 11:24:50.110921 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerStarted","Data":"1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3"} Feb 26 11:24:50 crc kubenswrapper[4724]: I0226 11:24:50.128358 
4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6v5x" podStartSLOduration=2.743687773 podStartE2EDuration="7.128341054s" podCreationTimestamp="2026-02-26 11:24:43 +0000 UTC" firstStartedPulling="2026-02-26 11:24:44.909570663 +0000 UTC m=+1151.565309778" lastFinishedPulling="2026-02-26 11:24:49.294223944 +0000 UTC m=+1155.949963059" observedRunningTime="2026-02-26 11:24:50.12737993 +0000 UTC m=+1156.783119065" watchObservedRunningTime="2026-02-26 11:24:50.128341054 +0000 UTC m=+1156.784080159" Feb 26 11:24:53 crc kubenswrapper[4724]: I0226 11:24:53.952060 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:53 crc kubenswrapper[4724]: I0226 11:24:53.952755 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:54 crc kubenswrapper[4724]: I0226 11:24:54.010928 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:54 crc kubenswrapper[4724]: I0226 11:24:54.176409 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:54 crc kubenswrapper[4724]: I0226 11:24:54.845273 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6v5x"] Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.160926 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x6v5x" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="registry-server" containerID="cri-o://1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3" gracePeriod=2 Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.559691 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.674231 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8ftx\" (UniqueName: \"kubernetes.io/projected/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-kube-api-access-v8ftx\") pod \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.674307 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-catalog-content\") pod \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.674406 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-utilities\") pod \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\" (UID: \"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0\") " Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.675618 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-utilities" (OuterVolumeSpecName: "utilities") pod "c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" (UID: "c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.684054 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-kube-api-access-v8ftx" (OuterVolumeSpecName: "kube-api-access-v8ftx") pod "c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" (UID: "c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0"). InnerVolumeSpecName "kube-api-access-v8ftx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.775583 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:56 crc kubenswrapper[4724]: I0226 11:24:56.775654 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8ftx\" (UniqueName: \"kubernetes.io/projected/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-kube-api-access-v8ftx\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.169392 4724 generic.go:334] "Generic (PLEG): container finished" podID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerID="1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3" exitCode=0 Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.169444 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerDied","Data":"1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3"} Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.169489 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6v5x" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.169501 4724 scope.go:117] "RemoveContainer" containerID="1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.169490 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6v5x" event={"ID":"c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0","Type":"ContainerDied","Data":"03fa4627838e9954c3dc89ac145a2890e65f339462c5e69263c799252a84a7c1"} Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.197104 4724 scope.go:117] "RemoveContainer" containerID="d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.217878 4724 scope.go:117] "RemoveContainer" containerID="1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.239860 4724 scope.go:117] "RemoveContainer" containerID="1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3" Feb 26 11:24:57 crc kubenswrapper[4724]: E0226 11:24:57.240804 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3\": container with ID starting with 1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3 not found: ID does not exist" containerID="1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.240852 4724 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3"} err="failed to get container status \"1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3\": rpc error: code = NotFound desc = could not find container \"1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3\": container with ID starting with 1dfc202772d6c221a13a058efac2f613d94d37aceef560c543a8e273100c2dc3 not found: ID does not exist" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.240881 4724 scope.go:117] "RemoveContainer" containerID="d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e" Feb 26 11:24:57 crc kubenswrapper[4724]: E0226 11:24:57.241169 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e\": container with ID starting with d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e not found: ID does not exist" containerID="d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.241224 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e"} err="failed to get container status \"d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e\": rpc error: code = NotFound desc = could not find container \"d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e\": container with ID starting with d4960b89b2777d096057871a9113514c37b133c5029cbcc5726ff0d358d7036e not found: ID does not exist" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.241251 4724 scope.go:117] "RemoveContainer" containerID="1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8" Feb 26 11:24:57 crc kubenswrapper[4724]: E0226 11:24:57.241557 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8\": container with ID starting with 1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8 not found: ID does not exist" containerID="1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.241579 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8"} err="failed to get container status \"1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8\": rpc error: code = NotFound desc = could not find container \"1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8\": container with ID starting with 1c05403e950541638e05fbb9fab341baa9a5356d71056c445a99d10885d313c8 not found: ID does not exist" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.843079 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" (UID: "c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:24:57 crc kubenswrapper[4724]: I0226 11:24:57.887776 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:24:58 crc kubenswrapper[4724]: I0226 11:24:58.086668 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6v5x"] Feb 26 11:24:58 crc kubenswrapper[4724]: I0226 11:24:58.091000 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x6v5x"] Feb 26 11:24:59 crc kubenswrapper[4724]: I0226 11:24:59.984011 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" path="/var/lib/kubelet/pods/c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0/volumes" Feb 26 11:25:00 crc kubenswrapper[4724]: I0226 11:25:00.659766 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c46bf8994-k9qf2" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.083352 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64754968d5-4ktxs" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.191537 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-47752"] Feb 26 11:25:10 crc kubenswrapper[4724]: E0226 11:25:10.191802 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="extract-utilities" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.191815 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="extract-utilities" Feb 26 11:25:10 crc kubenswrapper[4724]: E0226 11:25:10.191824 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="registry-server" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.191830 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="registry-server" Feb 26 11:25:10 crc kubenswrapper[4724]: E0226 11:25:10.191842 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="extract-content" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.191850 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="extract-content" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.191969 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ac5cb5-dfb4-4234-ab0f-2cf680f948f0" containerName="registry-server" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.192835 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.226737 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47752"] Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.362379 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-utilities\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.362452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxdj\" (UniqueName: \"kubernetes.io/projected/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-kube-api-access-dqxdj\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.362526 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-catalog-content\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.464940 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-catalog-content\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.465098 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-utilities\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.465136 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxdj\" (UniqueName: \"kubernetes.io/projected/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-kube-api-access-dqxdj\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.465756 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-catalog-content\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.466253 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-utilities\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.491092 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dqxdj\" (UniqueName: \"kubernetes.io/projected/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-kube-api-access-dqxdj\") pod \"redhat-marketplace-47752\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.518570 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.820606 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47752"] Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.937192 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-xv452"] Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.938114 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.953616 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.956540 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lhjwh" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.961161 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b86hc"] Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.972903 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.979664 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.979819 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 26 11:25:10 crc kubenswrapper[4724]: I0226 11:25:10.996267 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-xv452"] Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.079908 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvl6v\" (UniqueName: \"kubernetes.io/projected/933b3336-9cea-4b27-92e3-3fcf69076040-kube-api-access-jvl6v\") pod \"frr-k8s-webhook-server-7f989f654f-xv452\" (UID: \"933b3336-9cea-4b27-92e3-3fcf69076040\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.079984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-sockets\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080035 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnkwk\" (UniqueName: \"kubernetes.io/projected/d848b417-9306-4564-b059-0dc84bd7ec1a-kube-api-access-wnkwk\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080065 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-startup\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080097 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-reloader\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/933b3336-9cea-4b27-92e3-3fcf69076040-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xv452\" (UID: \"933b3336-9cea-4b27-92e3-3fcf69076040\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080171 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080252 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-conf\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.080296 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.183916 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/933b3336-9cea-4b27-92e3-3fcf69076040-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xv452\" (UID: \"933b3336-9cea-4b27-92e3-3fcf69076040\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.183985 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184022 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-conf\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184061 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs\") pod \"frr-k8s-b86hc\" 
(UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184103 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvl6v\" (UniqueName: \"kubernetes.io/projected/933b3336-9cea-4b27-92e3-3fcf69076040-kube-api-access-jvl6v\") pod \"frr-k8s-webhook-server-7f989f654f-xv452\" (UID: \"933b3336-9cea-4b27-92e3-3fcf69076040\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-sockets\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnkwk\" (UniqueName: \"kubernetes.io/projected/d848b417-9306-4564-b059-0dc84bd7ec1a-kube-api-access-wnkwk\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184219 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-startup\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.184254 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-reloader\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.185155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-reloader\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.186161 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.186476 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-conf\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.186604 4724 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.186683 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs podName:d848b417-9306-4564-b059-0dc84bd7ec1a nodeName:}" failed. No retries permitted until 2026-02-26 11:25:11.686656727 +0000 UTC m=+1178.342396042 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs") pod "frr-k8s-b86hc" (UID: "d848b417-9306-4564-b059-0dc84bd7ec1a") : secret "frr-k8s-certs-secret" not found
Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.187231 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-sockets\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc"
Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.188262 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d848b417-9306-4564-b059-0dc84bd7ec1a-frr-startup\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc"
Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.191986 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/933b3336-9cea-4b27-92e3-3fcf69076040-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xv452\" (UID: \"933b3336-9cea-4b27-92e3-3fcf69076040\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452"
Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.220673 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvl6v\" (UniqueName: \"kubernetes.io/projected/933b3336-9cea-4b27-92e3-3fcf69076040-kube-api-access-jvl6v\") pod \"frr-k8s-webhook-server-7f989f654f-xv452\" (UID: \"933b3336-9cea-4b27-92e3-3fcf69076040\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452"
Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.232235 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnkwk\" (UniqueName: \"kubernetes.io/projected/d848b417-9306-4564-b059-0dc84bd7ec1a-kube-api-access-wnkwk\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc"
Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.255908 4724 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.353711 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerID="e5f0c1aeb9bfe6e49859cc904f6bb78a38a42711fb678b661fd2587c59b9ae6a" exitCode=0 Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.362511 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.365249 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5vsqn"] Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.367217 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47752" event={"ID":"fb89dd88-1f9e-4f23-9b12-dc85e159ab16","Type":"ContainerDied","Data":"e5f0c1aeb9bfe6e49859cc904f6bb78a38a42711fb678b661fd2587c59b9ae6a"} Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.367256 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47752" event={"ID":"fb89dd88-1f9e-4f23-9b12-dc85e159ab16","Type":"ContainerStarted","Data":"d027b0479b255b1792b569437802d574e45107882499fbee5e1205615193bce8"} Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.367351 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.383402 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.383833 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.384037 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ptdjs" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.393582 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.393637 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-cg9xd"] Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.394904 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408796 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-metrics-certs\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408856 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5488s\" (UniqueName: \"kubernetes.io/projected/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-kube-api-access-5488s\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408882 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9b29feac-9647-448f-8c83-e33894da59dd-metallb-excludel2\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408909 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns9fg\" (UniqueName: \"kubernetes.io/projected/9b29feac-9647-448f-8c83-e33894da59dd-kube-api-access-ns9fg\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408929 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-cert\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408955 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.408972 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-metrics-certs\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.409334 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-cg9xd"] Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.414869 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509428 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-metrics-certs\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " 
pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509495 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5488s\" (UniqueName: \"kubernetes.io/projected/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-kube-api-access-5488s\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509532 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9b29feac-9647-448f-8c83-e33894da59dd-metallb-excludel2\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509567 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns9fg\" (UniqueName: \"kubernetes.io/projected/9b29feac-9647-448f-8c83-e33894da59dd-kube-api-access-ns9fg\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509598 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-cert\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509638 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.509661 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-metrics-certs\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.509819 4724 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.509881 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-metrics-certs podName:9b29feac-9647-448f-8c83-e33894da59dd nodeName:}" failed. No retries permitted until 2026-02-26 11:25:12.009863164 +0000 UTC m=+1178.665602279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-metrics-certs") pod "speaker-5vsqn" (UID: "9b29feac-9647-448f-8c83-e33894da59dd") : secret "speaker-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.510043 4724 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.510067 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist podName:9b29feac-9647-448f-8c83-e33894da59dd nodeName:}" failed. 
No retries permitted until 2026-02-26 11:25:12.010060049 +0000 UTC m=+1178.665799164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist") pod "speaker-5vsqn" (UID: "9b29feac-9647-448f-8c83-e33894da59dd") : secret "metallb-memberlist" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.510481 4724 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.510565 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-metrics-certs podName:665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4 nodeName:}" failed. No retries permitted until 2026-02-26 11:25:12.010539211 +0000 UTC m=+1178.666278326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-metrics-certs") pod "controller-86ddb6bd46-cg9xd" (UID: "665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4") : secret "controller-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.510852 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9b29feac-9647-448f-8c83-e33894da59dd-metallb-excludel2\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.522411 4724 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.526809 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-cert\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.554362 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5488s\" (UniqueName: \"kubernetes.io/projected/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-kube-api-access-5488s\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.565305 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns9fg\" (UniqueName: \"kubernetes.io/projected/9b29feac-9647-448f-8c83-e33894da59dd-kube-api-access-ns9fg\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.714672 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.714987 4724 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: E0226 11:25:11.715078 4724 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs podName:d848b417-9306-4564-b059-0dc84bd7ec1a nodeName:}" failed. No retries permitted until 2026-02-26 11:25:12.715050576 +0000 UTC m=+1179.370789691 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs") pod "frr-k8s-b86hc" (UID: "d848b417-9306-4564-b059-0dc84bd7ec1a") : secret "frr-k8s-certs-secret" not found Feb 26 11:25:11 crc kubenswrapper[4724]: I0226 11:25:11.900719 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-xv452"] Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.018760 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.019148 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-metrics-certs\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.019230 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-metrics-certs\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:12 crc kubenswrapper[4724]: E0226 11:25:12.019487 4724 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 11:25:12 crc kubenswrapper[4724]: E0226 11:25:12.019574 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist podName:9b29feac-9647-448f-8c83-e33894da59dd nodeName:}" failed. No retries permitted until 2026-02-26 11:25:13.019553657 +0000 UTC m=+1179.675292792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist") pod "speaker-5vsqn" (UID: "9b29feac-9647-448f-8c83-e33894da59dd") : secret "metallb-memberlist" not found Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.023469 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4-metrics-certs\") pod \"controller-86ddb6bd46-cg9xd\" (UID: \"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4\") " pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.026051 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-metrics-certs\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.056247 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.363239 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" event={"ID":"933b3336-9cea-4b27-92e3-3fcf69076040","Type":"ContainerStarted","Data":"c5cfed036211af0b0e37384ab8466cbb623417d8c9d19183efcfdbb0a7757cd6"} Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.423831 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-cg9xd"] Feb 26 11:25:12 crc kubenswrapper[4724]: W0226 11:25:12.428904 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665cc442_6cd7_4069_b0b4_2e2ee8a0b7d4.slice/crio-64abf939a10f7eb54a556b26260fc7f18c0f787f1522af27413b02490d45e931 WatchSource:0}: Error finding container 64abf939a10f7eb54a556b26260fc7f18c0f787f1522af27413b02490d45e931: Status 404 returned error can't find the container with id 64abf939a10f7eb54a556b26260fc7f18c0f787f1522af27413b02490d45e931 Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.728393 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.736827 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d848b417-9306-4564-b059-0dc84bd7ec1a-metrics-certs\") pod \"frr-k8s-b86hc\" (UID: \"d848b417-9306-4564-b059-0dc84bd7ec1a\") " pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:12 crc kubenswrapper[4724]: I0226 11:25:12.793808 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.037490 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.040803 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b29feac-9647-448f-8c83-e33894da59dd-memberlist\") pod \"speaker-5vsqn\" (UID: \"9b29feac-9647-448f-8c83-e33894da59dd\") " pod="metallb-system/speaker-5vsqn" Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.239542 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5vsqn" Feb 26 11:25:13 crc kubenswrapper[4724]: W0226 11:25:13.266016 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b29feac_9647_448f_8c83_e33894da59dd.slice/crio-787a9edbf674b0ec5df04656f93d5d5dec0cc46b8a69be108e167f5dc8ca30e1 WatchSource:0}: Error finding container 787a9edbf674b0ec5df04656f93d5d5dec0cc46b8a69be108e167f5dc8ca30e1: Status 404 returned error can't find the container with id 787a9edbf674b0ec5df04656f93d5d5dec0cc46b8a69be108e167f5dc8ca30e1 Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.372132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-cg9xd" event={"ID":"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4","Type":"ContainerStarted","Data":"ef9c1a2e52a983319e2c5b73ad6e641d4abadc70c879fd1389afc33035cd8ef4"} Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.372204 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-cg9xd" event={"ID":"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4","Type":"ContainerStarted","Data":"e9905c9c4d0a511361fea1b02e6383dae866846bbb07c82596dcbb2ad988d638"} Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.372222 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-cg9xd" event={"ID":"665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4","Type":"ContainerStarted","Data":"64abf939a10f7eb54a556b26260fc7f18c0f787f1522af27413b02490d45e931"} Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.372264 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.374552 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5vsqn" event={"ID":"9b29feac-9647-448f-8c83-e33894da59dd","Type":"ContainerStarted","Data":"787a9edbf674b0ec5df04656f93d5d5dec0cc46b8a69be108e167f5dc8ca30e1"} Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.376439 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"c8ca8f5d6995d348d1c4c50d9d1536f1d2d7991e12876565985ee9ecf51b8bda"} Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.379021 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerID="3bf8c50b2e7688d339704534cec0145b50fe43b24b307a08ad7136adbbc1467e" exitCode=0 Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.379095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47752" event={"ID":"fb89dd88-1f9e-4f23-9b12-dc85e159ab16","Type":"ContainerDied","Data":"3bf8c50b2e7688d339704534cec0145b50fe43b24b307a08ad7136adbbc1467e"} Feb 26 11:25:13 crc kubenswrapper[4724]: I0226 11:25:13.407160 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-cg9xd" podStartSLOduration=2.407145084 podStartE2EDuration="2.407145084s" podCreationTimestamp="2026-02-26 11:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:25:13.403307526 +0000 UTC m=+1180.059046641" watchObservedRunningTime="2026-02-26 11:25:13.407145084 +0000 UTC m=+1180.062884199" Feb 26 11:25:14 crc kubenswrapper[4724]: I0226 
11:25:14.406929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47752" event={"ID":"fb89dd88-1f9e-4f23-9b12-dc85e159ab16","Type":"ContainerStarted","Data":"631b4e0d3f1136fe00d6e49e73c80d84d1cc2474537488b34625df695bccca39"} Feb 26 11:25:14 crc kubenswrapper[4724]: I0226 11:25:14.423205 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5vsqn" event={"ID":"9b29feac-9647-448f-8c83-e33894da59dd","Type":"ContainerStarted","Data":"fe9d08bed8036b304f6c2addc7f05fff94de18ec92d3f18423899fa146ff7102"} Feb 26 11:25:14 crc kubenswrapper[4724]: I0226 11:25:14.423253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5vsqn" event={"ID":"9b29feac-9647-448f-8c83-e33894da59dd","Type":"ContainerStarted","Data":"6cf2f31249fa55988cf1c4af0096e7b7ba9dcead7d60a10a05218664dfdf149c"} Feb 26 11:25:14 crc kubenswrapper[4724]: I0226 11:25:14.427161 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5vsqn" Feb 26 11:25:14 crc kubenswrapper[4724]: I0226 11:25:14.467666 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-47752" podStartSLOduration=1.911554308 podStartE2EDuration="4.467641746s" podCreationTimestamp="2026-02-26 11:25:10 +0000 UTC" firstStartedPulling="2026-02-26 11:25:11.36199685 +0000 UTC m=+1178.017735965" lastFinishedPulling="2026-02-26 11:25:13.918084288 +0000 UTC m=+1180.573823403" observedRunningTime="2026-02-26 11:25:14.44697167 +0000 UTC m=+1181.102710785" watchObservedRunningTime="2026-02-26 11:25:14.467641746 +0000 UTC m=+1181.123380851" Feb 26 11:25:14 crc kubenswrapper[4724]: I0226 11:25:14.491995 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5vsqn" podStartSLOduration=3.491975245 podStartE2EDuration="3.491975245s" podCreationTimestamp="2026-02-26 11:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:25:14.490262042 +0000 UTC m=+1181.146001177" watchObservedRunningTime="2026-02-26 11:25:14.491975245 +0000 UTC m=+1181.147714370" Feb 26 11:25:20 crc kubenswrapper[4724]: I0226 11:25:20.519010 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:20 crc kubenswrapper[4724]: I0226 11:25:20.519678 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:20 crc kubenswrapper[4724]: I0226 11:25:20.571136 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:21 crc kubenswrapper[4724]: I0226 11:25:21.545573 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:21 crc kubenswrapper[4724]: I0226 11:25:21.600169 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47752"] Feb 26 11:25:22 crc kubenswrapper[4724]: I0226 11:25:22.063967 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-cg9xd" Feb 26 11:25:23 crc kubenswrapper[4724]: I0226 11:25:23.246692 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5vsqn" Feb 26 11:25:23 crc 
kubenswrapper[4724]: I0226 11:25:23.515964 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-47752" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="registry-server" containerID="cri-o://631b4e0d3f1136fe00d6e49e73c80d84d1cc2474537488b34625df695bccca39" gracePeriod=2 Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.526574 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerID="631b4e0d3f1136fe00d6e49e73c80d84d1cc2474537488b34625df695bccca39" exitCode=0 Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.526681 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47752" event={"ID":"fb89dd88-1f9e-4f23-9b12-dc85e159ab16","Type":"ContainerDied","Data":"631b4e0d3f1136fe00d6e49e73c80d84d1cc2474537488b34625df695bccca39"} Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.527014 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47752" event={"ID":"fb89dd88-1f9e-4f23-9b12-dc85e159ab16","Type":"ContainerDied","Data":"d027b0479b255b1792b569437802d574e45107882499fbee5e1205615193bce8"} Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.527039 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d027b0479b255b1792b569437802d574e45107882499fbee5e1205615193bce8" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.528990 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.567536 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-utilities\") pod \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.567624 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-catalog-content\") pod \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.567667 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxdj\" (UniqueName: \"kubernetes.io/projected/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-kube-api-access-dqxdj\") pod \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\" (UID: \"fb89dd88-1f9e-4f23-9b12-dc85e159ab16\") " Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.568703 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-utilities" (OuterVolumeSpecName: "utilities") pod "fb89dd88-1f9e-4f23-9b12-dc85e159ab16" (UID: "fb89dd88-1f9e-4f23-9b12-dc85e159ab16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.578668 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-kube-api-access-dqxdj" (OuterVolumeSpecName: "kube-api-access-dqxdj") pod "fb89dd88-1f9e-4f23-9b12-dc85e159ab16" (UID: "fb89dd88-1f9e-4f23-9b12-dc85e159ab16"). 
InnerVolumeSpecName "kube-api-access-dqxdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.606106 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb89dd88-1f9e-4f23-9b12-dc85e159ab16" (UID: "fb89dd88-1f9e-4f23-9b12-dc85e159ab16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.669230 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.669276 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:24 crc kubenswrapper[4724]: I0226 11:25:24.669335 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqxdj\" (UniqueName: \"kubernetes.io/projected/fb89dd88-1f9e-4f23-9b12-dc85e159ab16-kube-api-access-dqxdj\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.533758 4724 generic.go:334] "Generic (PLEG): container finished" podID="d848b417-9306-4564-b059-0dc84bd7ec1a" containerID="26038e00bc14bdda8bf4063bd060926f8fea10aefaadc0d4e51aaeda4fbddde5" exitCode=0 Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.533858 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerDied","Data":"26038e00bc14bdda8bf4063bd060926f8fea10aefaadc0d4e51aaeda4fbddde5"} Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.536739 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47752" Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.536972 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" event={"ID":"933b3336-9cea-4b27-92e3-3fcf69076040","Type":"ContainerStarted","Data":"beb24d3b45b6267cc19d21f5748e7aa4d8403c404984aac116ef5d071501db66"} Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.625384 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" podStartSLOduration=2.845002496 podStartE2EDuration="15.625342807s" podCreationTimestamp="2026-02-26 11:25:10 +0000 UTC" firstStartedPulling="2026-02-26 11:25:11.924501427 +0000 UTC m=+1178.580240542" lastFinishedPulling="2026-02-26 11:25:24.704841738 +0000 UTC m=+1191.360580853" observedRunningTime="2026-02-26 11:25:25.608730904 +0000 UTC m=+1192.264470029" watchObservedRunningTime="2026-02-26 11:25:25.625342807 +0000 UTC m=+1192.281081932" Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.628754 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47752"] Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.642301 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-47752"] Feb 26 11:25:25 crc kubenswrapper[4724]: I0226 11:25:25.991436 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" path="/var/lib/kubelet/pods/fb89dd88-1f9e-4f23-9b12-dc85e159ab16/volumes" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.389361 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6wd2c"] Feb 26 11:25:26 crc kubenswrapper[4724]: E0226 11:25:26.389644 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="extract-content" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.389673 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="extract-content" Feb 26 11:25:26 crc kubenswrapper[4724]: E0226 11:25:26.389689 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="registry-server" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.389697 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="registry-server" Feb 26 11:25:26 crc kubenswrapper[4724]: E0226 11:25:26.389710 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="extract-utilities" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.389721 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="extract-utilities" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.389845 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb89dd88-1f9e-4f23-9b12-dc85e159ab16" containerName="registry-server" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.390337 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.394169 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.394206 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.394406 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wwktn" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.421271 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6wd2c"] Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.499008 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flrmd\" (UniqueName: \"kubernetes.io/projected/dad4b354-1f73-488e-9163-b2c72b8d10d1-kube-api-access-flrmd\") pod \"openstack-operator-index-6wd2c\" (UID: \"dad4b354-1f73-488e-9163-b2c72b8d10d1\") " pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.546940 4724 generic.go:334] "Generic (PLEG): container finished" podID="d848b417-9306-4564-b059-0dc84bd7ec1a" containerID="b236ace7a69febc0491631cd02de8c7eea5cd3f2100464b7299dbe2c3314288a" exitCode=0 Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.548149 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerDied","Data":"b236ace7a69febc0491631cd02de8c7eea5cd3f2100464b7299dbe2c3314288a"} Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.548279 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.601072 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flrmd\" (UniqueName: \"kubernetes.io/projected/dad4b354-1f73-488e-9163-b2c72b8d10d1-kube-api-access-flrmd\") pod \"openstack-operator-index-6wd2c\" (UID: \"dad4b354-1f73-488e-9163-b2c72b8d10d1\") " pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.631812 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flrmd\" (UniqueName: \"kubernetes.io/projected/dad4b354-1f73-488e-9163-b2c72b8d10d1-kube-api-access-flrmd\") pod \"openstack-operator-index-6wd2c\" (UID: \"dad4b354-1f73-488e-9163-b2c72b8d10d1\") " pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:26 crc kubenswrapper[4724]: I0226 11:25:26.706490 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:27 crc kubenswrapper[4724]: I0226 11:25:27.037209 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6wd2c"] Feb 26 11:25:27 crc kubenswrapper[4724]: I0226 11:25:27.557677 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wd2c" event={"ID":"dad4b354-1f73-488e-9163-b2c72b8d10d1","Type":"ContainerStarted","Data":"8b789d1e0804acb8eafb6e188565473580d192402da509ca51f1c14919cc614d"} Feb 26 11:25:27 crc kubenswrapper[4724]: I0226 11:25:27.561638 4724 generic.go:334] "Generic (PLEG): container finished" podID="d848b417-9306-4564-b059-0dc84bd7ec1a" containerID="b95e3be7253d01dc0cecb4673bde6b96f93520309d098d90f7fe19cda4eb1348" exitCode=0 Feb 26 11:25:27 crc kubenswrapper[4724]: I0226 11:25:27.561744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerDied","Data":"b95e3be7253d01dc0cecb4673bde6b96f93520309d098d90f7fe19cda4eb1348"} Feb 26 11:25:28 crc kubenswrapper[4724]: I0226 11:25:28.570595 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"19843f282e3ad157c7a357927b0bd9b395262354ea03f997fb79926d597a8de7"} Feb 26 11:25:28 crc kubenswrapper[4724]: I0226 11:25:28.570885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"990a08e0dcc2034346c604c8aef46b9ceb60030a2989049e148217de7bcb15ab"} Feb 26 11:25:29 crc kubenswrapper[4724]: I0226 11:25:29.257638 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6wd2c"] Feb 26 11:25:29 crc kubenswrapper[4724]: I0226 11:25:29.863298 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5f2tw"] Feb 26 11:25:29 crc kubenswrapper[4724]: I0226 11:25:29.867109 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:29 crc kubenswrapper[4724]: I0226 11:25:29.871912 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5f2tw"] Feb 26 11:25:29 crc kubenswrapper[4724]: I0226 11:25:29.956655 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44zx\" (UniqueName: \"kubernetes.io/projected/ee48a99c-cb5f-4564-9631-daeae942461e-kube-api-access-k44zx\") pod \"openstack-operator-index-5f2tw\" (UID: \"ee48a99c-cb5f-4564-9631-daeae942461e\") " pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:30 crc kubenswrapper[4724]: I0226 11:25:30.058642 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44zx\" (UniqueName: \"kubernetes.io/projected/ee48a99c-cb5f-4564-9631-daeae942461e-kube-api-access-k44zx\") pod \"openstack-operator-index-5f2tw\" (UID: \"ee48a99c-cb5f-4564-9631-daeae942461e\") " pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:30 crc kubenswrapper[4724]: I0226 11:25:30.081345 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44zx\" (UniqueName: \"kubernetes.io/projected/ee48a99c-cb5f-4564-9631-daeae942461e-kube-api-access-k44zx\") pod \"openstack-operator-index-5f2tw\" (UID: \"ee48a99c-cb5f-4564-9631-daeae942461e\") " pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:30 crc kubenswrapper[4724]: I0226 11:25:30.187919 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:32 crc kubenswrapper[4724]: I0226 11:25:32.648456 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5f2tw"] Feb 26 11:25:32 crc kubenswrapper[4724]: I0226 11:25:32.669683 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"f9b329cf55b5abd939ef987e8bf7abc221b421ab288a4634afb497fbbb1a2155"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.679953 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6wd2c" podUID="dad4b354-1f73-488e-9163-b2c72b8d10d1" containerName="registry-server" containerID="cri-o://631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c" gracePeriod=2 Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.680479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wd2c" event={"ID":"dad4b354-1f73-488e-9163-b2c72b8d10d1","Type":"ContainerStarted","Data":"631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.692253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"9790da61f793084d082a21bdb2d3ae14d013773405cf399deb5f4f15cc36c936"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.692307 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"44b9ee5a140193aee94ad5559ff34b7186a7cc4104d083f2237a84b90f6ff829"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 
11:25:33.692321 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b86hc" event={"ID":"d848b417-9306-4564-b059-0dc84bd7ec1a","Type":"ContainerStarted","Data":"fdf83f7ae062b9f0c9f9d10bdbb063b27a31464f8f0687abc359b8d2e83f59b0"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.694317 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.698926 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5f2tw" event={"ID":"ee48a99c-cb5f-4564-9631-daeae942461e","Type":"ContainerStarted","Data":"8775074ef5e6d35861bc55d352f63d9982ea8327216c278bad77d3ec8ef8d79e"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.698977 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5f2tw" event={"ID":"ee48a99c-cb5f-4564-9631-daeae942461e","Type":"ContainerStarted","Data":"14cf13afecd95b3b33a7196f5930d8793431876a081b4996d009caa683303448"} Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.786245 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-b86hc" podStartSLOduration=12.00467558 podStartE2EDuration="23.78622735s" podCreationTimestamp="2026-02-26 11:25:10 +0000 UTC" firstStartedPulling="2026-02-26 11:25:12.947965236 +0000 UTC m=+1179.603704351" lastFinishedPulling="2026-02-26 11:25:24.729517006 +0000 UTC m=+1191.385256121" observedRunningTime="2026-02-26 11:25:33.784615949 +0000 UTC m=+1200.440355074" watchObservedRunningTime="2026-02-26 11:25:33.78622735 +0000 UTC m=+1200.441966465" Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.788456 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6wd2c" podStartSLOduration=2.411580033 podStartE2EDuration="7.788441996s" podCreationTimestamp="2026-02-26 11:25:26 +0000 UTC" firstStartedPulling="2026-02-26 11:25:27.058428822 +0000 UTC m=+1193.714167937" lastFinishedPulling="2026-02-26 11:25:32.435290785 +0000 UTC m=+1199.091029900" observedRunningTime="2026-02-26 11:25:33.741941053 +0000 UTC m=+1200.397680168" watchObservedRunningTime="2026-02-26 11:25:33.788441996 +0000 UTC m=+1200.444181121" Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.942551 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5f2tw" podStartSLOduration=4.772739506 podStartE2EDuration="4.942512628s" podCreationTimestamp="2026-02-26 11:25:29 +0000 UTC" firstStartedPulling="2026-02-26 11:25:32.677745116 +0000 UTC m=+1199.333484241" lastFinishedPulling="2026-02-26 11:25:32.847518238 +0000 UTC m=+1199.503257363" observedRunningTime="2026-02-26 11:25:33.925889465 +0000 UTC m=+1200.581628600" watchObservedRunningTime="2026-02-26 11:25:33.942512628 +0000 UTC m=+1200.598251763" Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.970547 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2rtz"] Feb 26 11:25:33 crc kubenswrapper[4724]: I0226 11:25:33.976500 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.003587 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-utilities\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.003919 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-catalog-content\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.003965 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzsp\" (UniqueName: \"kubernetes.io/projected/d57efcae-a1f4-46d5-b050-20d34411342f-kube-api-access-mrzsp\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.014788 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2rtz"] Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.108502 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-utilities\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.108657 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-catalog-content\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.108783 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzsp\" (UniqueName: \"kubernetes.io/projected/d57efcae-a1f4-46d5-b050-20d34411342f-kube-api-access-mrzsp\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.109707 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-utilities\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.110345 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-catalog-content\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.148755 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mrzsp\" (UniqueName: \"kubernetes.io/projected/d57efcae-a1f4-46d5-b050-20d34411342f-kube-api-access-mrzsp\") pod \"community-operators-c2rtz\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.358678 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.537807 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.691244 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2rtz"] Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.726470 4724 generic.go:334] "Generic (PLEG): container finished" podID="dad4b354-1f73-488e-9163-b2c72b8d10d1" containerID="631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c" exitCode=0 Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.726552 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wd2c" event={"ID":"dad4b354-1f73-488e-9163-b2c72b8d10d1","Type":"ContainerDied","Data":"631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c"} Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.726590 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6wd2c" event={"ID":"dad4b354-1f73-488e-9163-b2c72b8d10d1","Type":"ContainerDied","Data":"8b789d1e0804acb8eafb6e188565473580d192402da509ca51f1c14919cc614d"} Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.726611 4724 scope.go:117] "RemoveContainer" containerID="631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.726742 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6wd2c" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.730207 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2rtz" event={"ID":"d57efcae-a1f4-46d5-b050-20d34411342f","Type":"ContainerStarted","Data":"e8de99c5c88c59452647514f46d76af9c981600063166abe182cda119932337e"} Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.734886 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flrmd\" (UniqueName: \"kubernetes.io/projected/dad4b354-1f73-488e-9163-b2c72b8d10d1-kube-api-access-flrmd\") pod \"dad4b354-1f73-488e-9163-b2c72b8d10d1\" (UID: \"dad4b354-1f73-488e-9163-b2c72b8d10d1\") " Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.745209 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad4b354-1f73-488e-9163-b2c72b8d10d1-kube-api-access-flrmd" (OuterVolumeSpecName: "kube-api-access-flrmd") pod "dad4b354-1f73-488e-9163-b2c72b8d10d1" (UID: "dad4b354-1f73-488e-9163-b2c72b8d10d1"). InnerVolumeSpecName "kube-api-access-flrmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.770404 4724 scope.go:117] "RemoveContainer" containerID="631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c" Feb 26 11:25:34 crc kubenswrapper[4724]: E0226 11:25:34.772573 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c\": container with ID starting with 631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c not found: ID does not exist" containerID="631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.772617 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c"} err="failed to get container status \"631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c\": rpc error: code = NotFound desc = could not find container \"631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c\": container with ID starting with 631a0d4bf2c1024fab84235d5d2349d8f95971ab85256dcffd6df72348f5675c not found: ID does not exist" Feb 26 11:25:34 crc kubenswrapper[4724]: I0226 11:25:34.836494 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flrmd\" (UniqueName: \"kubernetes.io/projected/dad4b354-1f73-488e-9163-b2c72b8d10d1-kube-api-access-flrmd\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:35 crc kubenswrapper[4724]: I0226 11:25:35.152551 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6wd2c"] Feb 26 11:25:35 crc kubenswrapper[4724]: I0226 11:25:35.160302 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6wd2c"] Feb 26 11:25:35 crc kubenswrapper[4724]: I0226 11:25:35.742966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2rtz" event={"ID":"d57efcae-a1f4-46d5-b050-20d34411342f","Type":"ContainerDied","Data":"23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e"} Feb 26 11:25:35 crc kubenswrapper[4724]: I0226 11:25:35.742900 4724 generic.go:334] "Generic (PLEG): container finished" podID="d57efcae-a1f4-46d5-b050-20d34411342f" containerID="23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e" exitCode=0 Feb 26 11:25:35 crc kubenswrapper[4724]: I0226 11:25:35.985747 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad4b354-1f73-488e-9163-b2c72b8d10d1" path="/var/lib/kubelet/pods/dad4b354-1f73-488e-9163-b2c72b8d10d1/volumes" Feb 26 11:25:37 crc kubenswrapper[4724]: I0226 11:25:37.768054 4724 generic.go:334] "Generic (PLEG): container finished" podID="d57efcae-a1f4-46d5-b050-20d34411342f" containerID="c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475" exitCode=0 Feb 26 11:25:37 crc kubenswrapper[4724]: I0226 11:25:37.768130 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2rtz" event={"ID":"d57efcae-a1f4-46d5-b050-20d34411342f","Type":"ContainerDied","Data":"c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475"} Feb 26 11:25:37 crc kubenswrapper[4724]: I0226 11:25:37.794931 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:37 crc kubenswrapper[4724]: I0226 
11:25:37.842734 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:38 crc kubenswrapper[4724]: I0226 11:25:38.778402 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2rtz" event={"ID":"d57efcae-a1f4-46d5-b050-20d34411342f","Type":"ContainerStarted","Data":"b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd"} Feb 26 11:25:38 crc kubenswrapper[4724]: I0226 11:25:38.783268 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b86hc" Feb 26 11:25:38 crc kubenswrapper[4724]: I0226 11:25:38.799745 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2rtz" podStartSLOduration=3.317936008 podStartE2EDuration="5.799726355s" podCreationTimestamp="2026-02-26 11:25:33 +0000 UTC" firstStartedPulling="2026-02-26 11:25:35.745538639 +0000 UTC m=+1202.401277754" lastFinishedPulling="2026-02-26 11:25:38.227328986 +0000 UTC m=+1204.883068101" observedRunningTime="2026-02-26 11:25:38.794749248 +0000 UTC m=+1205.450488373" watchObservedRunningTime="2026-02-26 11:25:38.799726355 +0000 UTC m=+1205.455465470" Feb 26 11:25:40 crc kubenswrapper[4724]: I0226 11:25:40.188988 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:40 crc kubenswrapper[4724]: I0226 11:25:40.189368 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:40 crc kubenswrapper[4724]: I0226 11:25:40.253151 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:40 crc kubenswrapper[4724]: I0226 11:25:40.824491 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5f2tw" Feb 26 11:25:41 crc kubenswrapper[4724]: I0226 11:25:41.261259 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xv452" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.093339 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7"] Feb 26 11:25:43 crc kubenswrapper[4724]: E0226 11:25:43.093949 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad4b354-1f73-488e-9163-b2c72b8d10d1" containerName="registry-server" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.093970 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad4b354-1f73-488e-9163-b2c72b8d10d1" containerName="registry-server" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.094132 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad4b354-1f73-488e-9163-b2c72b8d10d1" containerName="registry-server" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.095244 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.101356 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jd7js" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.117914 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7"] Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.227654 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-util\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.227816 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmqr\" (UniqueName: \"kubernetes.io/projected/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-kube-api-access-lcmqr\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.227969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-bundle\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.329614 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-bundle\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.329688 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-util\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.329761 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmqr\" (UniqueName: \"kubernetes.io/projected/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-kube-api-access-lcmqr\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.330215 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-bundle\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.330301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-util\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.354061 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmqr\" (UniqueName: \"kubernetes.io/projected/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-kube-api-access-lcmqr\") pod \"93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.416975 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.632445 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7"] Feb 26 11:25:43 crc kubenswrapper[4724]: W0226 11:25:43.644069 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d0adab1_1760_4649_9b4a_63dbe6bf84a2.slice/crio-bdcf0559d72e6d068732aad15c68b834b27b5e1b47f4913a1fb68f7e975b887d WatchSource:0}: Error finding container bdcf0559d72e6d068732aad15c68b834b27b5e1b47f4913a1fb68f7e975b887d: Status 404 returned error can't find the container with id bdcf0559d72e6d068732aad15c68b834b27b5e1b47f4913a1fb68f7e975b887d Feb 26 11:25:43 crc kubenswrapper[4724]: I0226 11:25:43.816487 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" event={"ID":"3d0adab1-1760-4649-9b4a-63dbe6bf84a2","Type":"ContainerStarted","Data":"bdcf0559d72e6d068732aad15c68b834b27b5e1b47f4913a1fb68f7e975b887d"} Feb 26 11:25:44 crc kubenswrapper[4724]: I0226 11:25:44.358921 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:44 crc kubenswrapper[4724]: I0226 11:25:44.359248 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:44 crc kubenswrapper[4724]: I0226 11:25:44.401153 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:44 crc kubenswrapper[4724]: I0226 11:25:44.824550 4724 generic.go:334] "Generic (PLEG): container finished" podID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerID="a2cf067e68142b8ff19a366661513364c21a2824dd980e4cfc2572ddfd3cac54" exitCode=0 Feb 26 11:25:44 crc kubenswrapper[4724]: I0226 11:25:44.824613 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" event={"ID":"3d0adab1-1760-4649-9b4a-63dbe6bf84a2","Type":"ContainerDied","Data":"a2cf067e68142b8ff19a366661513364c21a2824dd980e4cfc2572ddfd3cac54"} Feb 26 11:25:44 crc kubenswrapper[4724]: I0226 11:25:44.872427 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:46 crc kubenswrapper[4724]: I0226 11:25:46.841446 4724 generic.go:334] "Generic (PLEG): container finished" podID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerID="1eefe902e1f0753ecdd02f84128a175d17997e6cfb90397440b156d873a25af8" exitCode=0 Feb 26 11:25:46 crc kubenswrapper[4724]: I0226 11:25:46.842002 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" event={"ID":"3d0adab1-1760-4649-9b4a-63dbe6bf84a2","Type":"ContainerDied","Data":"1eefe902e1f0753ecdd02f84128a175d17997e6cfb90397440b156d873a25af8"} Feb 26 11:25:48 crc kubenswrapper[4724]: I0226 11:25:48.042116 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2rtz"] Feb 26 11:25:48 crc kubenswrapper[4724]: I0226 11:25:48.042625 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2rtz" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="registry-server" containerID="cri-o://b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd" gracePeriod=2 Feb 26 11:25:48 crc kubenswrapper[4724]: I0226 11:25:48.857033 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" event={"ID":"3d0adab1-1760-4649-9b4a-63dbe6bf84a2","Type":"ContainerStarted","Data":"a5da0d0494dbaffa4a3fd44fb875c3c069644135b77bdd0802d823aeb1aa3109"} Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.551375 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.619453 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-utilities\") pod \"d57efcae-a1f4-46d5-b050-20d34411342f\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.619553 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrzsp\" (UniqueName: \"kubernetes.io/projected/d57efcae-a1f4-46d5-b050-20d34411342f-kube-api-access-mrzsp\") pod \"d57efcae-a1f4-46d5-b050-20d34411342f\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.619631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-catalog-content\") pod \"d57efcae-a1f4-46d5-b050-20d34411342f\" (UID: \"d57efcae-a1f4-46d5-b050-20d34411342f\") " Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.620291 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-utilities" (OuterVolumeSpecName: "utilities") pod "d57efcae-a1f4-46d5-b050-20d34411342f" (UID: "d57efcae-a1f4-46d5-b050-20d34411342f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.627085 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57efcae-a1f4-46d5-b050-20d34411342f-kube-api-access-mrzsp" (OuterVolumeSpecName: "kube-api-access-mrzsp") pod "d57efcae-a1f4-46d5-b050-20d34411342f" (UID: "d57efcae-a1f4-46d5-b050-20d34411342f"). InnerVolumeSpecName "kube-api-access-mrzsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.674582 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d57efcae-a1f4-46d5-b050-20d34411342f" (UID: "d57efcae-a1f4-46d5-b050-20d34411342f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.721457 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.721504 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d57efcae-a1f4-46d5-b050-20d34411342f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.721520 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrzsp\" (UniqueName: \"kubernetes.io/projected/d57efcae-a1f4-46d5-b050-20d34411342f-kube-api-access-mrzsp\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.865193 4724 generic.go:334] "Generic (PLEG): container finished" podID="d57efcae-a1f4-46d5-b050-20d34411342f" containerID="b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd" exitCode=0 Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.865266 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2rtz" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.865284 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2rtz" event={"ID":"d57efcae-a1f4-46d5-b050-20d34411342f","Type":"ContainerDied","Data":"b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd"} Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.865317 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2rtz" event={"ID":"d57efcae-a1f4-46d5-b050-20d34411342f","Type":"ContainerDied","Data":"e8de99c5c88c59452647514f46d76af9c981600063166abe182cda119932337e"} Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.865336 4724 scope.go:117] "RemoveContainer" containerID="b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.868411 4724 generic.go:334] "Generic (PLEG): container finished" podID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerID="a5da0d0494dbaffa4a3fd44fb875c3c069644135b77bdd0802d823aeb1aa3109" exitCode=0 Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.868459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" event={"ID":"3d0adab1-1760-4649-9b4a-63dbe6bf84a2","Type":"ContainerDied","Data":"a5da0d0494dbaffa4a3fd44fb875c3c069644135b77bdd0802d823aeb1aa3109"} Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.890075 4724 scope.go:117] "RemoveContainer" containerID="c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.913408 4724 scope.go:117] "RemoveContainer" containerID="23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.915031 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2rtz"] Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.920679 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2rtz"] Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.930711 4724 scope.go:117] "RemoveContainer" containerID="b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd" Feb 26 11:25:49 crc kubenswrapper[4724]: E0226 11:25:49.931144 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd\": container with ID starting with b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd not found: ID does not exist" containerID="b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.931211 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd"} err="failed to get container status \"b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd\": rpc error: code = NotFound desc = could not find container \"b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd\": container with ID starting with b4e0e648579a6d29e1fb76f61cba9efaca77ed771535b65d52ae4f53b6a02ebd not found: ID does not exist" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.931243 4724 scope.go:117] "RemoveContainer" 
containerID="c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475" Feb 26 11:25:49 crc kubenswrapper[4724]: E0226 11:25:49.931622 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475\": container with ID starting with c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475 not found: ID does not exist" containerID="c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.931662 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475"} err="failed to get container status \"c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475\": rpc error: code = NotFound desc = could not find container \"c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475\": container with ID starting with c8048a923988fb4f8a040624300ff775e9fb84450ba66ebe5943ff853f6e6475 not found: ID does not exist" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.931693 4724 scope.go:117] "RemoveContainer" containerID="23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e" Feb 26 11:25:49 crc kubenswrapper[4724]: E0226 11:25:49.931898 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e\": container with ID starting with 23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e not found: ID does not exist" containerID="23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.931923 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e"} err="failed to get container status \"23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e\": rpc error: code = NotFound desc = could not find container \"23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e\": container with ID starting with 23bc5dfe995cf63a1cf40cd31ce1ea8ca59a9b3578a40bedac855e9aab74cd6e not found: ID does not exist" Feb 26 11:25:49 crc kubenswrapper[4724]: I0226 11:25:49.982821 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" path="/var/lib/kubelet/pods/d57efcae-a1f4-46d5-b050-20d34411342f/volumes" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.103253 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.242865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-util\") pod \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.243034 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-bundle\") pod \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.243090 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcmqr\" (UniqueName: \"kubernetes.io/projected/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-kube-api-access-lcmqr\") pod \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\" (UID: \"3d0adab1-1760-4649-9b4a-63dbe6bf84a2\") " Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.243899 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-bundle" (OuterVolumeSpecName: "bundle") pod "3d0adab1-1760-4649-9b4a-63dbe6bf84a2" (UID: "3d0adab1-1760-4649-9b4a-63dbe6bf84a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.248046 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-kube-api-access-lcmqr" (OuterVolumeSpecName: "kube-api-access-lcmqr") pod "3d0adab1-1760-4649-9b4a-63dbe6bf84a2" (UID: "3d0adab1-1760-4649-9b4a-63dbe6bf84a2"). InnerVolumeSpecName "kube-api-access-lcmqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.256407 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-util" (OuterVolumeSpecName: "util") pod "3d0adab1-1760-4649-9b4a-63dbe6bf84a2" (UID: "3d0adab1-1760-4649-9b4a-63dbe6bf84a2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.344800 4724 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.344843 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcmqr\" (UniqueName: \"kubernetes.io/projected/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-kube-api-access-lcmqr\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.344856 4724 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d0adab1-1760-4649-9b4a-63dbe6bf84a2-util\") on node \"crc\" DevicePath \"\"" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.882691 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" event={"ID":"3d0adab1-1760-4649-9b4a-63dbe6bf84a2","Type":"ContainerDied","Data":"bdcf0559d72e6d068732aad15c68b834b27b5e1b47f4913a1fb68f7e975b887d"} Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.882738 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdcf0559d72e6d068732aad15c68b834b27b5e1b47f4913a1fb68f7e975b887d" Feb 26 11:25:51 crc kubenswrapper[4724]: I0226 11:25:51.882742 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.280637 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d"] Feb 26 11:25:54 crc kubenswrapper[4724]: E0226 11:25:54.281273 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="registry-server" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281290 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="registry-server" Feb 26 11:25:54 crc kubenswrapper[4724]: E0226 11:25:54.281304 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="extract-content" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281312 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="extract-content" Feb 26 11:25:54 crc kubenswrapper[4724]: E0226 11:25:54.281331 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="extract" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281340 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="extract" Feb 26 11:25:54 crc kubenswrapper[4724]: E0226 11:25:54.281353 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="util" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281363 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="util" Feb 26 11:25:54 crc kubenswrapper[4724]: E0226 11:25:54.281375 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="pull" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281383 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="pull" Feb 26 11:25:54 crc kubenswrapper[4724]: E0226 11:25:54.281398 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="extract-utilities" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281408 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="extract-utilities" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281550 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0adab1-1760-4649-9b4a-63dbe6bf84a2" containerName="extract" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.281565 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57efcae-a1f4-46d5-b050-20d34411342f" containerName="registry-server" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.282073 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.286607 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-t6zr7" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.304769 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d"] Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.387925 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwf7f\" (UniqueName: \"kubernetes.io/projected/20b666d6-e71f-4bdb-b71d-44ac3a0c74c6-kube-api-access-nwf7f\") pod \"openstack-operator-controller-init-76b6d74844-bpg9d\" (UID: \"20b666d6-e71f-4bdb-b71d-44ac3a0c74c6\") " pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.489613 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwf7f\" (UniqueName: \"kubernetes.io/projected/20b666d6-e71f-4bdb-b71d-44ac3a0c74c6-kube-api-access-nwf7f\") pod \"openstack-operator-controller-init-76b6d74844-bpg9d\" (UID: \"20b666d6-e71f-4bdb-b71d-44ac3a0c74c6\") " pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.518297 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwf7f\" (UniqueName: \"kubernetes.io/projected/20b666d6-e71f-4bdb-b71d-44ac3a0c74c6-kube-api-access-nwf7f\") pod \"openstack-operator-controller-init-76b6d74844-bpg9d\" (UID: \"20b666d6-e71f-4bdb-b71d-44ac3a0c74c6\") " pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:25:54 crc kubenswrapper[4724]: I0226 11:25:54.598706 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:25:55 crc kubenswrapper[4724]: I0226 11:25:55.068875 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d"] Feb 26 11:25:55 crc kubenswrapper[4724]: W0226 11:25:55.072416 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b666d6_e71f_4bdb_b71d_44ac3a0c74c6.slice/crio-81f8a0c09c6168ff8be1fa38d6329fd13673e540b769f3e61bef7e172b7bb175 WatchSource:0}: Error finding container 81f8a0c09c6168ff8be1fa38d6329fd13673e540b769f3e61bef7e172b7bb175: Status 404 returned error can't find the container with id 81f8a0c09c6168ff8be1fa38d6329fd13673e540b769f3e61bef7e172b7bb175 Feb 26 11:25:56 crc kubenswrapper[4724]: I0226 11:25:56.028807 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" event={"ID":"20b666d6-e71f-4bdb-b71d-44ac3a0c74c6","Type":"ContainerStarted","Data":"81f8a0c09c6168ff8be1fa38d6329fd13673e540b769f3e61bef7e172b7bb175"} Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.217513 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535086-qmcq9"] Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.218748 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.220693 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.220974 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.223224 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.231673 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-qmcq9"] Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.345611 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrjd\" (UniqueName: \"kubernetes.io/projected/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d-kube-api-access-mfrjd\") pod \"auto-csr-approver-29535086-qmcq9\" (UID: \"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d\") " pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.446949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrjd\" (UniqueName: \"kubernetes.io/projected/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d-kube-api-access-mfrjd\") pod \"auto-csr-approver-29535086-qmcq9\" (UID: \"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d\") " pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:00 crc kubenswrapper[4724]: I0226 11:26:00.470120 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrjd\" (UniqueName: \"kubernetes.io/projected/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d-kube-api-access-mfrjd\") pod \"auto-csr-approver-29535086-qmcq9\" (UID: \"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d\") " pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:00 crc 
kubenswrapper[4724]: I0226 11:26:00.537883 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:03 crc kubenswrapper[4724]: I0226 11:26:03.853201 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-qmcq9"] Feb 26 11:26:04 crc kubenswrapper[4724]: I0226 11:26:04.087361 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" event={"ID":"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d","Type":"ContainerStarted","Data":"62afbc1037e823a355044789f7adead8e4db66c4a1516f41f572d0cddd509071"} Feb 26 11:26:05 crc kubenswrapper[4724]: I0226 11:26:05.097213 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" event={"ID":"20b666d6-e71f-4bdb-b71d-44ac3a0c74c6","Type":"ContainerStarted","Data":"430df46104f24e578d04603de2117782c088277d43b507c16692acac663d88cb"} Feb 26 11:26:05 crc kubenswrapper[4724]: I0226 11:26:05.097794 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:26:06 crc kubenswrapper[4724]: I0226 11:26:06.106026 4724 generic.go:334] "Generic (PLEG): container finished" podID="ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d" containerID="b97bda9188d95594555c0b39fe8cafd2b472fc84658d15791afe2455e81531aa" exitCode=0 Feb 26 11:26:06 crc kubenswrapper[4724]: I0226 11:26:06.106081 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" event={"ID":"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d","Type":"ContainerDied","Data":"b97bda9188d95594555c0b39fe8cafd2b472fc84658d15791afe2455e81531aa"} Feb 26 11:26:06 crc kubenswrapper[4724]: I0226 11:26:06.123470 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" podStartSLOduration=3.146553855 podStartE2EDuration="12.123444327s" podCreationTimestamp="2026-02-26 11:25:54 +0000 UTC" firstStartedPulling="2026-02-26 11:25:55.074810484 +0000 UTC m=+1221.730549599" lastFinishedPulling="2026-02-26 11:26:04.051700956 +0000 UTC m=+1230.707440071" observedRunningTime="2026-02-26 11:26:05.139901364 +0000 UTC m=+1231.795640499" watchObservedRunningTime="2026-02-26 11:26:06.123444327 +0000 UTC m=+1232.779183452" Feb 26 11:26:07 crc kubenswrapper[4724]: I0226 11:26:07.375852 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:07 crc kubenswrapper[4724]: I0226 11:26:07.463755 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfrjd\" (UniqueName: \"kubernetes.io/projected/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d-kube-api-access-mfrjd\") pod \"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d\" (UID: \"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d\") " Feb 26 11:26:07 crc kubenswrapper[4724]: I0226 11:26:07.471775 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d-kube-api-access-mfrjd" (OuterVolumeSpecName: "kube-api-access-mfrjd") pod "ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d" (UID: "ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d"). InnerVolumeSpecName "kube-api-access-mfrjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:26:07 crc kubenswrapper[4724]: I0226 11:26:07.565372 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfrjd\" (UniqueName: \"kubernetes.io/projected/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d-kube-api-access-mfrjd\") on node \"crc\" DevicePath \"\"" Feb 26 11:26:08 crc kubenswrapper[4724]: I0226 11:26:08.120546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" event={"ID":"ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d","Type":"ContainerDied","Data":"62afbc1037e823a355044789f7adead8e4db66c4a1516f41f572d0cddd509071"} Feb 26 11:26:08 crc kubenswrapper[4724]: I0226 11:26:08.120593 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62afbc1037e823a355044789f7adead8e4db66c4a1516f41f572d0cddd509071" Feb 26 11:26:08 crc kubenswrapper[4724]: I0226 11:26:08.120638 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535086-qmcq9" Feb 26 11:26:08 crc kubenswrapper[4724]: I0226 11:26:08.452287 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-vg4gp"] Feb 26 11:26:08 crc kubenswrapper[4724]: I0226 11:26:08.457887 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535080-vg4gp"] Feb 26 11:26:09 crc kubenswrapper[4724]: I0226 11:26:09.983723 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1048467b-0158-4faa-b646-9ca7667afae5" path="/var/lib/kubelet/pods/1048467b-0158-4faa-b646-9ca7667afae5/volumes" Feb 26 11:26:14 crc kubenswrapper[4724]: I0226 11:26:14.603980 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-76b6d74844-bpg9d" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.968011 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr"] Feb 26 11:26:33 crc kubenswrapper[4724]: E0226 11:26:33.968980 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d" containerName="oc" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.968998 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d" containerName="oc" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.969139 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d" containerName="oc" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.969706 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.973526 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4"] Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.974466 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.983430 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k28ls" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.983433 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2nwbd" Feb 26 11:26:33 crc kubenswrapper[4724]: I0226 11:26:33.995675 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.003800 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.010085 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2vfzj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.039314 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.051135 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.072933 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjx9\" (UniqueName: \"kubernetes.io/projected/f0ccafa2-8b59-49e6-b881-ffaee0c98646-kube-api-access-7gjx9\") pod \"barbican-operator-controller-manager-868647ff47-q9sb4\" (UID: \"f0ccafa2-8b59-49e6-b881-ffaee0c98646\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.073009 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwqwr\" (UniqueName: \"kubernetes.io/projected/0cfee1c3-df60-4944-a16e-e01dd310f2c4-kube-api-access-hwqwr\") pod \"cinder-operator-controller-manager-55d77d7b5c-sbkcr\" (UID: \"0cfee1c3-df60-4944-a16e-e01dd310f2c4\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.073046 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjvk5\" (UniqueName: \"kubernetes.io/projected/9ae55185-83a5-47ea-b54f-01b31471f512-kube-api-access-xjvk5\") pod \"designate-operator-controller-manager-6d8bf5c495-p58p8\" (UID: \"9ae55185-83a5-47ea-b54f-01b31471f512\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.088884 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.108241 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.109136 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.114415 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.114988 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mbxs8" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.115190 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.117383 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5s4ws" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.152256 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.157270 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.173998 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tj89\" (UniqueName: \"kubernetes.io/projected/1c97807f-b47f-4762-80d8-a296d8108e19-kube-api-access-8tj89\") pod \"glance-operator-controller-manager-784b5bb6c5-6qq4t\" (UID: \"1c97807f-b47f-4762-80d8-a296d8108e19\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.174061 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjx9\" (UniqueName: \"kubernetes.io/projected/f0ccafa2-8b59-49e6-b881-ffaee0c98646-kube-api-access-7gjx9\") pod \"barbican-operator-controller-manager-868647ff47-q9sb4\" (UID: \"f0ccafa2-8b59-49e6-b881-ffaee0c98646\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.174106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwqwr\" (UniqueName: \"kubernetes.io/projected/0cfee1c3-df60-4944-a16e-e01dd310f2c4-kube-api-access-hwqwr\") pod \"cinder-operator-controller-manager-55d77d7b5c-sbkcr\" (UID: \"0cfee1c3-df60-4944-a16e-e01dd310f2c4\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.174151 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjvk5\" (UniqueName: \"kubernetes.io/projected/9ae55185-83a5-47ea-b54f-01b31471f512-kube-api-access-xjvk5\") pod \"designate-operator-controller-manager-6d8bf5c495-p58p8\" (UID: \"9ae55185-83a5-47ea-b54f-01b31471f512\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.174201 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792cv\" (UniqueName: \"kubernetes.io/projected/bc959a10-5f94-4d38-87d2-dda60f8ae078-kube-api-access-792cv\") pod \"heat-operator-controller-manager-69f49c598c-wxw2f\" (UID: 
\"bc959a10-5f94-4d38-87d2-dda60f8ae078\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.178322 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.179513 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.182933 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-g2xmv" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.195682 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.196568 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.198511 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.200313 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8kw9s" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.210267 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.227024 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwqwr\" (UniqueName: \"kubernetes.io/projected/0cfee1c3-df60-4944-a16e-e01dd310f2c4-kube-api-access-hwqwr\") pod \"cinder-operator-controller-manager-55d77d7b5c-sbkcr\" (UID: \"0cfee1c3-df60-4944-a16e-e01dd310f2c4\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.227030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjx9\" (UniqueName: \"kubernetes.io/projected/f0ccafa2-8b59-49e6-b881-ffaee0c98646-kube-api-access-7gjx9\") pod \"barbican-operator-controller-manager-868647ff47-q9sb4\" (UID: \"f0ccafa2-8b59-49e6-b881-ffaee0c98646\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.230729 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjvk5\" (UniqueName: \"kubernetes.io/projected/9ae55185-83a5-47ea-b54f-01b31471f512-kube-api-access-xjvk5\") pod \"designate-operator-controller-manager-6d8bf5c495-p58p8\" (UID: \"9ae55185-83a5-47ea-b54f-01b31471f512\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.234008 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.243387 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.244160 
4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.247850 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5t9s4" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.250845 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.256466 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.257491 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.269497 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.269940 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.270800 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.284078 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8wv8t" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.284254 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bz2s7" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.295579 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tj89\" (UniqueName: \"kubernetes.io/projected/1c97807f-b47f-4762-80d8-a296d8108e19-kube-api-access-8tj89\") pod \"glance-operator-controller-manager-784b5bb6c5-6qq4t\" (UID: \"1c97807f-b47f-4762-80d8-a296d8108e19\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.295880 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.295981 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792cv\" (UniqueName: \"kubernetes.io/projected/bc959a10-5f94-4d38-87d2-dda60f8ae078-kube-api-access-792cv\") pod \"heat-operator-controller-manager-69f49c598c-wxw2f\" (UID: \"bc959a10-5f94-4d38-87d2-dda60f8ae078\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.296071 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dj2w\" (UniqueName: 
\"kubernetes.io/projected/2714a834-e9ca-40b1-a73c-2b890783f29e-kube-api-access-7dj2w\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.296159 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szcw7\" (UniqueName: \"kubernetes.io/projected/731a1439-aa83-4119-ae37-23f526e6e73a-kube-api-access-szcw7\") pod \"horizon-operator-controller-manager-5b9b8895d5-6zmmb\" (UID: \"731a1439-aa83-4119-ae37-23f526e6e73a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.295914 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.304146 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.392498 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.392835 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tj89\" (UniqueName: \"kubernetes.io/projected/1c97807f-b47f-4762-80d8-a296d8108e19-kube-api-access-8tj89\") pod \"glance-operator-controller-manager-784b5bb6c5-6qq4t\" (UID: \"1c97807f-b47f-4762-80d8-a296d8108e19\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.399601 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zr7b\" (UniqueName: \"kubernetes.io/projected/5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73-kube-api-access-7zr7b\") pod \"manila-operator-controller-manager-67d996989d-bfrsl\" (UID: \"5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.399670 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.399722 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dj2w\" (UniqueName: \"kubernetes.io/projected/2714a834-e9ca-40b1-a73c-2b890783f29e-kube-api-access-7dj2w\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.399752 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gnd\" (UniqueName: \"kubernetes.io/projected/2178a458-4e8c-4d30-bbdb-8a0ef864fd80-kube-api-access-w5gnd\") pod \"ironic-operator-controller-manager-554564d7fc-pxtxm\" (UID: 
\"2178a458-4e8c-4d30-bbdb-8a0ef864fd80\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.399787 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szcw7\" (UniqueName: \"kubernetes.io/projected/731a1439-aa83-4119-ae37-23f526e6e73a-kube-api-access-szcw7\") pod \"horizon-operator-controller-manager-5b9b8895d5-6zmmb\" (UID: \"731a1439-aa83-4119-ae37-23f526e6e73a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.399824 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tltw7\" (UniqueName: \"kubernetes.io/projected/d2b50788-4e25-4589-84b8-00851a2a18b7-kube-api-access-tltw7\") pod \"keystone-operator-controller-manager-b4d948c87-rqsqh\" (UID: \"d2b50788-4e25-4589-84b8-00851a2a18b7\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:26:34 crc kubenswrapper[4724]: E0226 11:26:34.400001 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:34 crc kubenswrapper[4724]: E0226 11:26:34.400056 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert podName:2714a834-e9ca-40b1-a73c-2b890783f29e nodeName:}" failed. No retries permitted until 2026-02-26 11:26:34.900034002 +0000 UTC m=+1261.555773117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert") pod "infra-operator-controller-manager-79d975b745-8n9qj" (UID: "2714a834-e9ca-40b1-a73c-2b890783f29e") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.401489 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792cv\" (UniqueName: \"kubernetes.io/projected/bc959a10-5f94-4d38-87d2-dda60f8ae078-kube-api-access-792cv\") pod \"heat-operator-controller-manager-69f49c598c-wxw2f\" (UID: \"bc959a10-5f94-4d38-87d2-dda60f8ae078\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.418259 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.419354 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.419965 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.437110 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.437875 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.448672 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zjsjm" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.460964 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.461787 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.472916 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.481530 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-l46xs" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.481772 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cgpfj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.483116 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dj2w\" (UniqueName: \"kubernetes.io/projected/2714a834-e9ca-40b1-a73c-2b890783f29e-kube-api-access-7dj2w\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.487512 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.498170 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szcw7\" (UniqueName: \"kubernetes.io/projected/731a1439-aa83-4119-ae37-23f526e6e73a-kube-api-access-szcw7\") pod \"horizon-operator-controller-manager-5b9b8895d5-6zmmb\" (UID: \"731a1439-aa83-4119-ae37-23f526e6e73a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.500547 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zr7b\" (UniqueName: \"kubernetes.io/projected/5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73-kube-api-access-7zr7b\") pod \"manila-operator-controller-manager-67d996989d-bfrsl\" (UID: \"5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.500645 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zjb\" (UniqueName: \"kubernetes.io/projected/da86929c-f438-4994-80be-1a7aa3b7b76e-kube-api-access-56zjb\") pod \"mariadb-operator-controller-manager-6994f66f48-vp8fp\" (UID: \"da86929c-f438-4994-80be-1a7aa3b7b76e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.500687 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gnd\" 
(UniqueName: \"kubernetes.io/projected/2178a458-4e8c-4d30-bbdb-8a0ef864fd80-kube-api-access-w5gnd\") pod \"ironic-operator-controller-manager-554564d7fc-pxtxm\" (UID: \"2178a458-4e8c-4d30-bbdb-8a0ef864fd80\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.500717 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpdfz\" (UniqueName: \"kubernetes.io/projected/193a7bdd-a3a7-493d-8c99-a04d591e3a19-kube-api-access-cpdfz\") pod \"neutron-operator-controller-manager-6bd4687957-wjjjc\" (UID: \"193a7bdd-a3a7-493d-8c99-a04d591e3a19\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.500762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tltw7\" (UniqueName: \"kubernetes.io/projected/d2b50788-4e25-4589-84b8-00851a2a18b7-kube-api-access-tltw7\") pod \"keystone-operator-controller-manager-b4d948c87-rqsqh\" (UID: \"d2b50788-4e25-4589-84b8-00851a2a18b7\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.503555 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.511275 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.528595 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.529053 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.605763 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c48j\" (UniqueName: \"kubernetes.io/projected/9bcf19f6-1ed9-4315-a263-1bd5c8da7774-kube-api-access-5c48j\") pod \"nova-operator-controller-manager-567668f5cf-k75jd\" (UID: \"9bcf19f6-1ed9-4315-a263-1bd5c8da7774\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.606257 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zjb\" (UniqueName: \"kubernetes.io/projected/da86929c-f438-4994-80be-1a7aa3b7b76e-kube-api-access-56zjb\") pod \"mariadb-operator-controller-manager-6994f66f48-vp8fp\" (UID: \"da86929c-f438-4994-80be-1a7aa3b7b76e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.606303 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdfz\" (UniqueName: \"kubernetes.io/projected/193a7bdd-a3a7-493d-8c99-a04d591e3a19-kube-api-access-cpdfz\") pod \"neutron-operator-controller-manager-6bd4687957-wjjjc\" (UID: \"193a7bdd-a3a7-493d-8c99-a04d591e3a19\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.635757 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.639835 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.645810 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gnd\" (UniqueName: \"kubernetes.io/projected/2178a458-4e8c-4d30-bbdb-8a0ef864fd80-kube-api-access-w5gnd\") pod \"ironic-operator-controller-manager-554564d7fc-pxtxm\" (UID: \"2178a458-4e8c-4d30-bbdb-8a0ef864fd80\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.642582 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zr7b\" (UniqueName: \"kubernetes.io/projected/5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73-kube-api-access-7zr7b\") pod \"manila-operator-controller-manager-67d996989d-bfrsl\" (UID: \"5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.662816 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdfz\" (UniqueName: \"kubernetes.io/projected/193a7bdd-a3a7-493d-8c99-a04d591e3a19-kube-api-access-cpdfz\") pod \"neutron-operator-controller-manager-6bd4687957-wjjjc\" (UID: \"193a7bdd-a3a7-493d-8c99-a04d591e3a19\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.682897 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tltw7\" (UniqueName: \"kubernetes.io/projected/d2b50788-4e25-4589-84b8-00851a2a18b7-kube-api-access-tltw7\") pod \"keystone-operator-controller-manager-b4d948c87-rqsqh\" (UID: \"d2b50788-4e25-4589-84b8-00851a2a18b7\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.685384 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-l4zcj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.687464 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zjb\" (UniqueName: \"kubernetes.io/projected/da86929c-f438-4994-80be-1a7aa3b7b76e-kube-api-access-56zjb\") pod \"mariadb-operator-controller-manager-6994f66f48-vp8fp\" (UID: \"da86929c-f438-4994-80be-1a7aa3b7b76e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.707695 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c48j\" (UniqueName: \"kubernetes.io/projected/9bcf19f6-1ed9-4315-a263-1bd5c8da7774-kube-api-access-5c48j\") pod \"nova-operator-controller-manager-567668f5cf-k75jd\" (UID: \"9bcf19f6-1ed9-4315-a263-1bd5c8da7774\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.708425 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9spx\" (UniqueName: \"kubernetes.io/projected/5b790d8b-575d-462e-a9b1-512d91261517-kube-api-access-n9spx\") pod \"octavia-operator-controller-manager-659dc6bbfc-rjxxw\" (UID: \"5b790d8b-575d-462e-a9b1-512d91261517\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 
11:26:34.724234 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.754413 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.780934 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.781952 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c48j\" (UniqueName: \"kubernetes.io/projected/9bcf19f6-1ed9-4315-a263-1bd5c8da7774-kube-api-access-5c48j\") pod \"nova-operator-controller-manager-567668f5cf-k75jd\" (UID: \"9bcf19f6-1ed9-4315-a263-1bd5c8da7774\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.786407 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.789362 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.811661 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9spx\" (UniqueName: \"kubernetes.io/projected/5b790d8b-575d-462e-a9b1-512d91261517-kube-api-access-n9spx\") pod \"octavia-operator-controller-manager-659dc6bbfc-rjxxw\" (UID: \"5b790d8b-575d-462e-a9b1-512d91261517\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.814038 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fvhkj" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.814448 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.861574 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.862609 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.876372 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9spx\" (UniqueName: \"kubernetes.io/projected/5b790d8b-575d-462e-a9b1-512d91261517-kube-api-access-n9spx\") pod \"octavia-operator-controller-manager-659dc6bbfc-rjxxw\" (UID: \"5b790d8b-575d-462e-a9b1-512d91261517\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.880328 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.917273 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l"] Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.920615 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.920707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmz98\" (UniqueName: \"kubernetes.io/projected/39700bc5-43f0-49b6-b510-523322e34eb5-kube-api-access-gmz98\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.920773 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:34 crc kubenswrapper[4724]: E0226 11:26:34.920995 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:34 crc kubenswrapper[4724]: E0226 11:26:34.921083 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert podName:2714a834-e9ca-40b1-a73c-2b890783f29e nodeName:}" failed. No retries permitted until 2026-02-26 11:26:35.921061526 +0000 UTC m=+1262.576800641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert") pod "infra-operator-controller-manager-79d975b745-8n9qj" (UID: "2714a834-e9ca-40b1-a73c-2b890783f29e") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:34 crc kubenswrapper[4724]: I0226 11:26:34.940082 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.028833 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.028914 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmz98\" (UniqueName: \"kubernetes.io/projected/39700bc5-43f0-49b6-b510-523322e34eb5-kube-api-access-gmz98\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:35 crc kubenswrapper[4724]: E0226 11:26:35.029368 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:35 crc kubenswrapper[4724]: E0226 11:26:35.029410 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert podName:39700bc5-43f0-49b6-b510-523322e34eb5 nodeName:}" failed. No retries permitted until 2026-02-26 11:26:35.52939523 +0000 UTC m=+1262.185134345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" (UID: "39700bc5-43f0-49b6-b510-523322e34eb5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.029622 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.030420 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.036384 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-cx685" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.036915 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.042240 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.043147 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.065645 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.066576 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.092376 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-l69jr" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.092459 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5hqhk" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.103436 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.122156 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.123107 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.131657 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xfp\" (UniqueName: \"kubernetes.io/projected/cd71c91d-33bb-4eae-9f27-84f39ef7653d-kube-api-access-b8xfp\") pod \"ovn-operator-controller-manager-5955d8c787-pwg6d\" (UID: \"cd71c91d-33bb-4eae-9f27-84f39ef7653d\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.131707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvwq\" (UniqueName: \"kubernetes.io/projected/b14d5ade-65f3-4402-bacd-5acc8ef39ce5-kube-api-access-vjvwq\") pod \"telemetry-operator-controller-manager-589c568786-qhw9r\" (UID: \"b14d5ade-65f3-4402-bacd-5acc8ef39ce5\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.139439 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7nj2c" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.146357 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmz98\" (UniqueName: \"kubernetes.io/projected/39700bc5-43f0-49b6-b510-523322e34eb5-kube-api-access-gmz98\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.160672 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk997\" (UniqueName: \"kubernetes.io/projected/d90588ea-6237-4fd0-a321-9c6db1e07525-kube-api-access-gk997\") pod \"swift-operator-controller-manager-68f46476f-ntcpr\" (UID: \"d90588ea-6237-4fd0-a321-9c6db1e07525\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.235471 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.238061 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.266078 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.277220 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwv8c\" (UniqueName: \"kubernetes.io/projected/d49437c7-7f60-4304-b216-dcf93e31be87-kube-api-access-rwv8c\") pod \"placement-operator-controller-manager-8497b45c89-6j4fp\" (UID: \"d49437c7-7f60-4304-b216-dcf93e31be87\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.277305 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xfp\" (UniqueName: \"kubernetes.io/projected/cd71c91d-33bb-4eae-9f27-84f39ef7653d-kube-api-access-b8xfp\") pod \"ovn-operator-controller-manager-5955d8c787-pwg6d\" (UID: \"cd71c91d-33bb-4eae-9f27-84f39ef7653d\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.277325 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvwq\" (UniqueName: \"kubernetes.io/projected/b14d5ade-65f3-4402-bacd-5acc8ef39ce5-kube-api-access-vjvwq\") pod \"telemetry-operator-controller-manager-589c568786-qhw9r\" (UID: \"b14d5ade-65f3-4402-bacd-5acc8ef39ce5\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.277351 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk997\" (UniqueName: \"kubernetes.io/projected/d90588ea-6237-4fd0-a321-9c6db1e07525-kube-api-access-gk997\") pod \"swift-operator-controller-manager-68f46476f-ntcpr\" (UID: \"d90588ea-6237-4fd0-a321-9c6db1e07525\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.364364 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xfp\" (UniqueName: \"kubernetes.io/projected/cd71c91d-33bb-4eae-9f27-84f39ef7653d-kube-api-access-b8xfp\") pod \"ovn-operator-controller-manager-5955d8c787-pwg6d\" (UID: \"cd71c91d-33bb-4eae-9f27-84f39ef7653d\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.380344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwv8c\" (UniqueName: \"kubernetes.io/projected/d49437c7-7f60-4304-b216-dcf93e31be87-kube-api-access-rwv8c\") pod \"placement-operator-controller-manager-8497b45c89-6j4fp\" (UID: \"d49437c7-7f60-4304-b216-dcf93e31be87\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.397747 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk997\" (UniqueName: \"kubernetes.io/projected/d90588ea-6237-4fd0-a321-9c6db1e07525-kube-api-access-gk997\") pod \"swift-operator-controller-manager-68f46476f-ntcpr\" (UID: \"d90588ea-6237-4fd0-a321-9c6db1e07525\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 
11:26:35.441503 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.442360 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.443619 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjvwq\" (UniqueName: \"kubernetes.io/projected/b14d5ade-65f3-4402-bacd-5acc8ef39ce5-kube-api-access-vjvwq\") pod \"telemetry-operator-controller-manager-589c568786-qhw9r\" (UID: \"b14d5ade-65f3-4402-bacd-5acc8ef39ce5\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.449717 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.474392 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-287gn" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.479163 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.480032 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwv8c\" (UniqueName: \"kubernetes.io/projected/d49437c7-7f60-4304-b216-dcf93e31be87-kube-api-access-rwv8c\") pod \"placement-operator-controller-manager-8497b45c89-6j4fp\" (UID: \"d49437c7-7f60-4304-b216-dcf93e31be87\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.480215 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.482896 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxt4p\" (UniqueName: \"kubernetes.io/projected/897f5a3f-a04e-4725-8a9a-0ce91c8bb372-kube-api-access-vxt4p\") pod \"test-operator-controller-manager-5dc6794d5b-f8k6f\" (UID: \"897f5a3f-a04e-4725-8a9a-0ce91c8bb372\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.487753 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rsj9l" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.499344 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.514631 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.531927 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.583890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.583927 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxt4p\" (UniqueName: \"kubernetes.io/projected/897f5a3f-a04e-4725-8a9a-0ce91c8bb372-kube-api-access-vxt4p\") pod \"test-operator-controller-manager-5dc6794d5b-f8k6f\" (UID: \"897f5a3f-a04e-4725-8a9a-0ce91c8bb372\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:26:35 crc kubenswrapper[4724]: E0226 11:26:35.590340 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:35 crc kubenswrapper[4724]: E0226 11:26:35.590449 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert podName:39700bc5-43f0-49b6-b510-523322e34eb5 nodeName:}" failed. No retries permitted until 2026-02-26 11:26:36.590422993 +0000 UTC m=+1263.246162108 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" (UID: "39700bc5-43f0-49b6-b510-523322e34eb5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.612065 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.621235 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxt4p\" (UniqueName: \"kubernetes.io/projected/897f5a3f-a04e-4725-8a9a-0ce91c8bb372-kube-api-access-vxt4p\") pod \"test-operator-controller-manager-5dc6794d5b-f8k6f\" (UID: \"897f5a3f-a04e-4725-8a9a-0ce91c8bb372\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.647378 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:26:35 crc kubenswrapper[4724]: W0226 11:26:35.662162 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cfee1c3_df60_4944_a16e_e01dd310f2c4.slice/crio-46d85c717a572527f181088d59ddce88872ff09eca1536eb9e24576f1a72f31e WatchSource:0}: Error finding container 46d85c717a572527f181088d59ddce88872ff09eca1536eb9e24576f1a72f31e: Status 404 returned error can't find the container with id 46d85c717a572527f181088d59ddce88872ff09eca1536eb9e24576f1a72f31e Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.691386 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.703325 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88k8j\" (UniqueName: \"kubernetes.io/projected/d04a7b9b-e3a4-4876-bb57-10d86295d9c0-kube-api-access-88k8j\") pod \"watcher-operator-controller-manager-bccc79885-pgtk4\" (UID: \"d04a7b9b-e3a4-4876-bb57-10d86295d9c0\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.706791 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.714416 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.714858 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.715157 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hfrm6" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.719407 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.769264 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.807585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88k8j\" (UniqueName: \"kubernetes.io/projected/d04a7b9b-e3a4-4876-bb57-10d86295d9c0-kube-api-access-88k8j\") pod \"watcher-operator-controller-manager-bccc79885-pgtk4\" (UID: \"d04a7b9b-e3a4-4876-bb57-10d86295d9c0\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.816886 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.833949 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.843887 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.853739 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-scmg6" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.857156 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984"] Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.911350 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2dt2\" (UniqueName: \"kubernetes.io/projected/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-kube-api-access-g2dt2\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.911502 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.911583 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:35 crc kubenswrapper[4724]: I0226 11:26:35.921837 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88k8j\" (UniqueName: \"kubernetes.io/projected/d04a7b9b-e3a4-4876-bb57-10d86295d9c0-kube-api-access-88k8j\") pod \"watcher-operator-controller-manager-bccc79885-pgtk4\" (UID: \"d04a7b9b-e3a4-4876-bb57-10d86295d9c0\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.015078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jt5\" (UniqueName: \"kubernetes.io/projected/dc7781b3-4d7b-4855-8e76-bb3ad2028a9c-kube-api-access-f6jt5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2r984\" (UID: \"dc7781b3-4d7b-4855-8e76-bb3ad2028a9c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.015576 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2dt2\" (UniqueName: \"kubernetes.io/projected/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-kube-api-access-g2dt2\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.015632 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.015679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.015709 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.015860 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.015919 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:36.515895879 +0000 UTC m=+1263.171634994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.016248 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.016279 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert podName:2714a834-e9ca-40b1-a73c-2b890783f29e nodeName:}" failed. No retries permitted until 2026-02-26 11:26:38.016271049 +0000 UTC m=+1264.672010164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert") pod "infra-operator-controller-manager-79d975b745-8n9qj" (UID: "2714a834-e9ca-40b1-a73c-2b890783f29e") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.016308 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.016441 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:36.516408552 +0000 UTC m=+1263.172147837 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "metrics-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.053370 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.095821 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2dt2\" (UniqueName: \"kubernetes.io/projected/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-kube-api-access-g2dt2\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.119074 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jt5\" (UniqueName: \"kubernetes.io/projected/dc7781b3-4d7b-4855-8e76-bb3ad2028a9c-kube-api-access-f6jt5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2r984\" (UID: \"dc7781b3-4d7b-4855-8e76-bb3ad2028a9c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.161705 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jt5\" (UniqueName: \"kubernetes.io/projected/dc7781b3-4d7b-4855-8e76-bb3ad2028a9c-kube-api-access-f6jt5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2r984\" (UID: \"dc7781b3-4d7b-4855-8e76-bb3ad2028a9c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.297275 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.342364 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.445060 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" event={"ID":"f0ccafa2-8b59-49e6-b881-ffaee0c98646","Type":"ContainerStarted","Data":"b98234a8744fced2a03727d34edfbc5150a536eb66e32f165b84e9e7c970c383"} Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.446080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" event={"ID":"0cfee1c3-df60-4944-a16e-e01dd310f2c4","Type":"ContainerStarted","Data":"46d85c717a572527f181088d59ddce88872ff09eca1536eb9e24576f1a72f31e"} Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.534809 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.535294 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.535449 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.535517 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:37.535491876 +0000 UTC m=+1264.191230991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "metrics-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.535656 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.535758 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:37.535720392 +0000 UTC m=+1264.191459507 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.538267 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.636868 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.637091 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: E0226 11:26:36.637145 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert podName:39700bc5-43f0-49b6-b510-523322e34eb5 nodeName:}" failed. No retries permitted until 2026-02-26 11:26:38.637128029 +0000 UTC m=+1265.292867134 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" (UID: "39700bc5-43f0-49b6-b510-523322e34eb5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.669455 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.693487 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.746386 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.757491 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb"] Feb 26 11:26:36 crc kubenswrapper[4724]: W0226 11:26:36.764468 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b50788_4e25_4589_84b8_00851a2a18b7.slice/crio-194fcd48e1470e2e22e62dc016a0de56fa82d8bcc7f22b23d0e9be61c41debaa WatchSource:0}: Error finding container 194fcd48e1470e2e22e62dc016a0de56fa82d8bcc7f22b23d0e9be61c41debaa: Status 404 returned error can't find the container with id 194fcd48e1470e2e22e62dc016a0de56fa82d8bcc7f22b23d0e9be61c41debaa Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.768419 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.789257 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.802763 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp"] Feb 26 11:26:36 crc kubenswrapper[4724]: I0226 11:26:36.808149 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd"] Feb 26 11:26:36 crc kubenswrapper[4724]: W0226 11:26:36.819387 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bcf19f6_1ed9_4315_a263_1bd5c8da7774.slice/crio-01453565a1bf2874468017d83a7d709dca95a585750ee02949f5d7b51fa5334b WatchSource:0}: Error finding container 01453565a1bf2874468017d83a7d709dca95a585750ee02949f5d7b51fa5334b: Status 404 returned error can't find the container with id 01453565a1bf2874468017d83a7d709dca95a585750ee02949f5d7b51fa5334b Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.042826 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d"] Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.098607 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw"] Feb 26 11:26:37 crc kubenswrapper[4724]: W0226 11:26:37.128998 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b790d8b_575d_462e_a9b1_512d91261517.slice/crio-c3c07b7669699734806a2cce94b3d12d2cc1800a27562a6d4d70dadeb7ea3222 WatchSource:0}: Error finding container c3c07b7669699734806a2cce94b3d12d2cc1800a27562a6d4d70dadeb7ea3222: Status 404 returned error can't find the container with id c3c07b7669699734806a2cce94b3d12d2cc1800a27562a6d4d70dadeb7ea3222 Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.176757 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr"] Feb 26 11:26:37 crc kubenswrapper[4724]: W0226 11:26:37.180611 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd90588ea_6237_4fd0_a321_9c6db1e07525.slice/crio-eac821639faf753d233dba9f7f491872a406ce3470cf5c3bbe46642c21438480 WatchSource:0}: Error finding container eac821639faf753d233dba9f7f491872a406ce3470cf5c3bbe46642c21438480: Status 404 returned error can't find the container with id eac821639faf753d233dba9f7f491872a406ce3470cf5c3bbe46642c21438480 Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.196891 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4"] Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.211791 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp"] Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.227249 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc"] Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.231257 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f"] Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.236026 4724 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984"] Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.242356 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r"] Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.266879 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxt4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-f8k6f_openstack-operators(897f5a3f-a04e-4725-8a9a-0ce91c8bb372): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:26:37 crc kubenswrapper[4724]: W0226 11:26:37.268296 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod193a7bdd_a3a7_493d_8c99_a04d591e3a19.slice/crio-eec5fe4ab555ef7ebf942288f64370141a48b0a058c5a70d8667ca03c336ff86 WatchSource:0}: Error finding container eec5fe4ab555ef7ebf942288f64370141a48b0a058c5a70d8667ca03c336ff86: Status 404 returned error can't find the container with id eec5fe4ab555ef7ebf942288f64370141a48b0a058c5a70d8667ca03c336ff86 Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.268305 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" podUID="897f5a3f-a04e-4725-8a9a-0ce91c8bb372" Feb 26 11:26:37 crc kubenswrapper[4724]: W0226 11:26:37.285328 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49437c7_7f60_4304_b216_dcf93e31be87.slice/crio-a6f38be112bd8e935b526236bc5d07a4dce8c1a6a8e2d0693a8e20b0bc79032f WatchSource:0}: Error finding container a6f38be112bd8e935b526236bc5d07a4dce8c1a6a8e2d0693a8e20b0bc79032f: Status 404 returned error can't find the container with id a6f38be112bd8e935b526236bc5d07a4dce8c1a6a8e2d0693a8e20b0bc79032f Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.290037 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cpdfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-wjjjc_openstack-operators(193a7bdd-a3a7-493d-8c99-a04d591e3a19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.291253 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" 
podUID="193a7bdd-a3a7-493d-8c99-a04d591e3a19" Feb 26 11:26:37 crc kubenswrapper[4724]: W0226 11:26:37.305212 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc7781b3_4d7b_4855_8e76_bb3ad2028a9c.slice/crio-8a14c77674c2d6259d9a7e550b755fc88ac078398546d4af71f9a46c29e8fefd WatchSource:0}: Error finding container 8a14c77674c2d6259d9a7e550b755fc88ac078398546d4af71f9a46c29e8fefd: Status 404 returned error can't find the container with id 8a14c77674c2d6259d9a7e550b755fc88ac078398546d4af71f9a46c29e8fefd Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.307385 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwv8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-6j4fp_openstack-operators(d49437c7-7f60-4304-b216-dcf93e31be87): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.309071 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6jt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2r984_openstack-operators(dc7781b3-4d7b-4855-8e76-bb3ad2028a9c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.309170 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" podUID="d49437c7-7f60-4304-b216-dcf93e31be87" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.309534 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjvwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-qhw9r_openstack-operators(b14d5ade-65f3-4402-bacd-5acc8ef39ce5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.311151 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" podUID="dc7781b3-4d7b-4855-8e76-bb3ad2028a9c" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.311232 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" podUID="b14d5ade-65f3-4402-bacd-5acc8ef39ce5" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.461642 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" event={"ID":"1c97807f-b47f-4762-80d8-a296d8108e19","Type":"ContainerStarted","Data":"48bdaed9fa327ee4fc2a3278dc3285e48f7b26dfa52457b4c15f4110c6a2e3c0"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.463206 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" event={"ID":"d49437c7-7f60-4304-b216-dcf93e31be87","Type":"ContainerStarted","Data":"a6f38be112bd8e935b526236bc5d07a4dce8c1a6a8e2d0693a8e20b0bc79032f"} Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.468955 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" podUID="d49437c7-7f60-4304-b216-dcf93e31be87" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.469312 
4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" event={"ID":"5b790d8b-575d-462e-a9b1-512d91261517","Type":"ContainerStarted","Data":"c3c07b7669699734806a2cce94b3d12d2cc1800a27562a6d4d70dadeb7ea3222"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.471542 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" event={"ID":"da86929c-f438-4994-80be-1a7aa3b7b76e","Type":"ContainerStarted","Data":"735ffb6cbea23f12bf68e92eeb18269c33c64e6ea800d4200160cdf3029fb8bf"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.475240 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" event={"ID":"9ae55185-83a5-47ea-b54f-01b31471f512","Type":"ContainerStarted","Data":"1147bc0949c5a5813ceab21ceaf18e7f69e99bebae00ad30adb442ecbd61ce32"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.478165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" event={"ID":"2178a458-4e8c-4d30-bbdb-8a0ef864fd80","Type":"ContainerStarted","Data":"556d25507a2affca6946c25bab44b0a9ac27e0364ecf679329a62b3efec73353"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.479406 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" event={"ID":"731a1439-aa83-4119-ae37-23f526e6e73a","Type":"ContainerStarted","Data":"67d83bec00e0552a0d889fbac9f25abd171783939f614d24f6b5d4f6ac90144e"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.487730 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" event={"ID":"d90588ea-6237-4fd0-a321-9c6db1e07525","Type":"ContainerStarted","Data":"eac821639faf753d233dba9f7f491872a406ce3470cf5c3bbe46642c21438480"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.495348 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" event={"ID":"dc7781b3-4d7b-4855-8e76-bb3ad2028a9c","Type":"ContainerStarted","Data":"8a14c77674c2d6259d9a7e550b755fc88ac078398546d4af71f9a46c29e8fefd"} Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.497595 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" podUID="dc7781b3-4d7b-4855-8e76-bb3ad2028a9c" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.499141 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" event={"ID":"b14d5ade-65f3-4402-bacd-5acc8ef39ce5","Type":"ContainerStarted","Data":"1493c8b934c0b970705db9ca92718550975f633ab28dda281fb9ab68ef15eb3d"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.500910 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" event={"ID":"bc959a10-5f94-4d38-87d2-dda60f8ae078","Type":"ContainerStarted","Data":"e192695836081c035a451cc14310d456be1f8c02587d607a3c98d4670364643d"} 
Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.503021 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" podUID="b14d5ade-65f3-4402-bacd-5acc8ef39ce5" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.505420 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" event={"ID":"d2b50788-4e25-4589-84b8-00851a2a18b7","Type":"ContainerStarted","Data":"194fcd48e1470e2e22e62dc016a0de56fa82d8bcc7f22b23d0e9be61c41debaa"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.508928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" event={"ID":"897f5a3f-a04e-4725-8a9a-0ce91c8bb372","Type":"ContainerStarted","Data":"56e9f79b1e5fdf5d21159efd29f86e6e5af90454cc11cf9ca8120ca49dd620b9"} Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.511293 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" podUID="897f5a3f-a04e-4725-8a9a-0ce91c8bb372" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.512339 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" event={"ID":"5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73","Type":"ContainerStarted","Data":"c3babd080d03847d26826d95c615a990fb2787369e0581c9d5962aae4e553aab"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.514582 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" event={"ID":"cd71c91d-33bb-4eae-9f27-84f39ef7653d","Type":"ContainerStarted","Data":"e38b6291579aebfc9096fa52908899d6ce648e9424d1e5bac3aac6b0ae380c28"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.519316 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" event={"ID":"9bcf19f6-1ed9-4315-a263-1bd5c8da7774","Type":"ContainerStarted","Data":"01453565a1bf2874468017d83a7d709dca95a585750ee02949f5d7b51fa5334b"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.530431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" event={"ID":"d04a7b9b-e3a4-4876-bb57-10d86295d9c0","Type":"ContainerStarted","Data":"dbb1fac70caff152bcbe08068f2cafe5904bed52b7a43670520f8ac3fde1968c"} Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.533744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" event={"ID":"193a7bdd-a3a7-493d-8c99-a04d591e3a19","Type":"ContainerStarted","Data":"eec5fe4ab555ef7ebf942288f64370141a48b0a058c5a70d8667ca03c336ff86"} Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.537947 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" podUID="193a7bdd-a3a7-493d-8c99-a04d591e3a19" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.590402 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:37 crc kubenswrapper[4724]: I0226 11:26:37.590503 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.591601 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.591715 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:39.591687644 +0000 UTC m=+1266.247426789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "webhook-server-cert" not found Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.591941 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:26:37 crc kubenswrapper[4724]: E0226 11:26:37.592010 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:39.591988112 +0000 UTC m=+1266.247727297 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "metrics-server-cert" not found Feb 26 11:26:38 crc kubenswrapper[4724]: I0226 11:26:38.099492 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.099847 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.100136 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert podName:2714a834-e9ca-40b1-a73c-2b890783f29e nodeName:}" failed. No retries permitted until 2026-02-26 11:26:42.099920812 +0000 UTC m=+1268.755659927 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert") pod "infra-operator-controller-manager-79d975b745-8n9qj" (UID: "2714a834-e9ca-40b1-a73c-2b890783f29e") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.563827 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" podUID="d49437c7-7f60-4304-b216-dcf93e31be87" Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.563871 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" podUID="b14d5ade-65f3-4402-bacd-5acc8ef39ce5" Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.563865 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" podUID="897f5a3f-a04e-4725-8a9a-0ce91c8bb372" Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.563914 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" podUID="dc7781b3-4d7b-4855-8e76-bb3ad2028a9c" Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.565220 
4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" podUID="193a7bdd-a3a7-493d-8c99-a04d591e3a19" Feb 26 11:26:38 crc kubenswrapper[4724]: I0226 11:26:38.710461 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.710585 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:38 crc kubenswrapper[4724]: E0226 11:26:38.710643 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert podName:39700bc5-43f0-49b6-b510-523322e34eb5 nodeName:}" failed. No retries permitted until 2026-02-26 11:26:42.710618713 +0000 UTC m=+1269.366357828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" (UID: "39700bc5-43f0-49b6-b510-523322e34eb5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:39 crc kubenswrapper[4724]: I0226 11:26:39.627929 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:39 crc kubenswrapper[4724]: I0226 11:26:39.628027 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:39 crc kubenswrapper[4724]: E0226 11:26:39.628220 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:26:39 crc kubenswrapper[4724]: E0226 11:26:39.628214 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:26:39 crc kubenswrapper[4724]: E0226 11:26:39.628293 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:43.628272916 +0000 UTC m=+1270.284012031 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "webhook-server-cert" not found Feb 26 11:26:39 crc kubenswrapper[4724]: E0226 11:26:39.628308 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:43.628302426 +0000 UTC m=+1270.284041541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "metrics-server-cert" not found Feb 26 11:26:42 crc kubenswrapper[4724]: I0226 11:26:42.181644 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:42 crc kubenswrapper[4724]: E0226 11:26:42.181897 4724 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:42 crc kubenswrapper[4724]: E0226 11:26:42.182209 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert podName:2714a834-e9ca-40b1-a73c-2b890783f29e nodeName:}" failed. No retries permitted until 2026-02-26 11:26:50.182168297 +0000 UTC m=+1276.837907412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert") pod "infra-operator-controller-manager-79d975b745-8n9qj" (UID: "2714a834-e9ca-40b1-a73c-2b890783f29e") : secret "infra-operator-webhook-server-cert" not found Feb 26 11:26:42 crc kubenswrapper[4724]: I0226 11:26:42.790469 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:42 crc kubenswrapper[4724]: E0226 11:26:42.790710 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:42 crc kubenswrapper[4724]: E0226 11:26:42.790810 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert podName:39700bc5-43f0-49b6-b510-523322e34eb5 nodeName:}" failed. No retries permitted until 2026-02-26 11:26:50.790782504 +0000 UTC m=+1277.446521619 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" (UID: "39700bc5-43f0-49b6-b510-523322e34eb5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:43 crc kubenswrapper[4724]: I0226 11:26:43.711594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:43 crc kubenswrapper[4724]: I0226 11:26:43.711741 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:43 crc kubenswrapper[4724]: E0226 11:26:43.711942 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:26:43 crc kubenswrapper[4724]: E0226 11:26:43.712019 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:51.711995088 +0000 UTC m=+1278.367734203 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "metrics-server-cert" not found Feb 26 11:26:43 crc kubenswrapper[4724]: E0226 11:26:43.712550 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:26:43 crc kubenswrapper[4724]: E0226 11:26:43.712647 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:26:51.712636915 +0000 UTC m=+1278.368376030 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "webhook-server-cert" not found Feb 26 11:26:50 crc kubenswrapper[4724]: I0226 11:26:50.261263 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:50 crc kubenswrapper[4724]: I0226 11:26:50.270313 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2714a834-e9ca-40b1-a73c-2b890783f29e-cert\") pod \"infra-operator-controller-manager-79d975b745-8n9qj\" (UID: \"2714a834-e9ca-40b1-a73c-2b890783f29e\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:50 crc kubenswrapper[4724]: I0226 11:26:50.274015 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:26:50 crc kubenswrapper[4724]: E0226 11:26:50.581674 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 26 11:26:50 crc kubenswrapper[4724]: E0226 11:26:50.581962 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5gnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-pxtxm_openstack-operators(2178a458-4e8c-4d30-bbdb-8a0ef864fd80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:26:50 crc kubenswrapper[4724]: E0226 11:26:50.583296 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" podUID="2178a458-4e8c-4d30-bbdb-8a0ef864fd80" Feb 26 11:26:50 crc kubenswrapper[4724]: E0226 11:26:50.703576 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" podUID="2178a458-4e8c-4d30-bbdb-8a0ef864fd80" Feb 26 11:26:50 crc kubenswrapper[4724]: I0226 11:26:50.880245 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:26:50 crc kubenswrapper[4724]: E0226 11:26:50.880484 4724 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:50 crc kubenswrapper[4724]: E0226 11:26:50.880600 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert podName:39700bc5-43f0-49b6-b510-523322e34eb5 nodeName:}" failed. No retries permitted until 2026-02-26 11:27:06.880567718 +0000 UTC m=+1293.536306893 (durationBeforeRetry 16s). 
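The ErrImagePull entries above ("copying config: context canceled") put each manager container into ImagePullBackOff; the ContainerStarted events later in the log show the pulls eventually succeed. A quick way to follow those transitions without grepping the journal is to list the pod's events. The sketch below is hypothetical tooling (pod and namespace names are copied from the log; the kubeconfig path is an assumption):

```go
// events.go: a small client-go sketch (illustrative, not from this log's
// tooling) that lists events for one of the back-off pods to watch
// ErrImagePull/BackOff transitions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the ErrImagePull entries above.
	sel := "involvedObject.name=ironic-operator-controller-manager-554564d7fc-pxtxm"
	evs, err := cs.CoreV1().Events("openstack-operators").List(context.TODO(),
		metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		// Expect reasons like Pulling, Failed (ErrImagePull), BackOff, Pulled, Started.
		fmt.Printf("%s  %s  %s\n", e.LastTimestamp.Format("15:04:05"), e.Reason, e.Message)
	}
}
```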
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" (UID: "39700bc5-43f0-49b6-b510-523322e34eb5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 11:26:51 crc kubenswrapper[4724]: I0226 11:26:51.793575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:51 crc kubenswrapper[4724]: I0226 11:26:51.793719 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:26:51 crc kubenswrapper[4724]: E0226 11:26:51.793884 4724 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 11:26:51 crc kubenswrapper[4724]: E0226 11:26:51.794430 4724 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 11:26:51 crc kubenswrapper[4724]: E0226 11:26:51.794493 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:27:07.794471496 +0000 UTC m=+1294.450210611 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "webhook-server-cert" not found Feb 26 11:26:51 crc kubenswrapper[4724]: E0226 11:26:51.794945 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs podName:48de473d-2e43-44ee-b0d1-db2c8e11fc2b nodeName:}" failed. No retries permitted until 2026-02-26 11:27:07.794930998 +0000 UTC m=+1294.450670123 (durationBeforeRetry 16s). 
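Note how the retry delay doubles between attempts: durationBeforeRetry is 8s at 11:26:43 and 16s at 11:26:51 for the same volumes. The snippet below only illustrates that doubling schedule as observed; the starting delay is taken from the log, while the cap is an assumption for the sketch, not the kubelet's exact constant (its real policy lives in nestedpendingoperations.go):

```go
// backoff.go: illustrates the doubling retry delay visible in the log
// (8s at 11:26:43, then 16s at 11:26:51). The cap is an assumption for
// the sketch, not the kubelet's exact constant.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 8 * time.Second    // first delay observed in the log
	maxDelay := 2 * time.Minute // assumed upper bound
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```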
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs") pod "openstack-operator-controller-manager-75d9b57894-2862v" (UID: "48de473d-2e43-44ee-b0d1-db2c8e11fc2b") : secret "metrics-server-cert" not found Feb 26 11:26:54 crc kubenswrapper[4724]: E0226 11:26:54.967899 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 26 11:26:54 crc kubenswrapper[4724]: E0226 11:26:54.968560 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-792cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-wxw2f_openstack-operators(bc959a10-5f94-4d38-87d2-dda60f8ae078): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:26:54 crc kubenswrapper[4724]: E0226 11:26:54.969783 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" 
podUID="bc959a10-5f94-4d38-87d2-dda60f8ae078" Feb 26 11:26:55 crc kubenswrapper[4724]: E0226 11:26:55.730546 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" podUID="bc959a10-5f94-4d38-87d2-dda60f8ae078" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.067323 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.067576 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9spx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-rjxxw_openstack-operators(5b790d8b-575d-462e-a9b1-512d91261517): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.069452 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" podUID="5b790d8b-575d-462e-a9b1-512d91261517" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.644871 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.645470 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56zjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-vp8fp_openstack-operators(da86929c-f438-4994-80be-1a7aa3b7b76e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.646798 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" 
podUID="da86929c-f438-4994-80be-1a7aa3b7b76e" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.737074 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" podUID="5b790d8b-575d-462e-a9b1-512d91261517" Feb 26 11:26:56 crc kubenswrapper[4724]: E0226 11:26:56.739848 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" podUID="da86929c-f438-4994-80be-1a7aa3b7b76e" Feb 26 11:26:58 crc kubenswrapper[4724]: E0226 11:26:58.953880 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" Feb 26 11:26:58 crc kubenswrapper[4724]: E0226 11:26:58.954114 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zr7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-bfrsl_openstack-operators(5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:26:58 crc kubenswrapper[4724]: E0226 11:26:58.955242 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" podUID="5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73" Feb 26 11:26:59 crc kubenswrapper[4724]: E0226 11:26:59.512772 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be" Feb 26 11:26:59 crc kubenswrapper[4724]: E0226 11:26:59.513408 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tj89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-784b5bb6c5-6qq4t_openstack-operators(1c97807f-b47f-4762-80d8-a296d8108e19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:26:59 crc kubenswrapper[4724]: E0226 11:26:59.514617 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" podUID="1c97807f-b47f-4762-80d8-a296d8108e19" Feb 26 11:26:59 crc kubenswrapper[4724]: E0226 11:26:59.763502 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" podUID="5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73" Feb 26 11:26:59 crc kubenswrapper[4724]: E0226 11:26:59.766146 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:8f06b9963e5b324856ce8ed80872cf04fdfb299d4f5cf13cb1d26f4e69ed42be\\\"\"" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" podUID="1c97807f-b47f-4762-80d8-a296d8108e19" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.140954 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.141198 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tltw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-rqsqh_openstack-operators(d2b50788-4e25-4589-84b8-00851a2a18b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.142294 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" podUID="d2b50788-4e25-4589-84b8-00851a2a18b7" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.674238 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.674466 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c48j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-k75jd_openstack-operators(9bcf19f6-1ed9-4315-a263-1bd5c8da7774): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.675709 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" podUID="9bcf19f6-1ed9-4315-a263-1bd5c8da7774" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.772024 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" podUID="9bcf19f6-1ed9-4315-a263-1bd5c8da7774" Feb 26 11:27:00 crc kubenswrapper[4724]: E0226 11:27:00.772649 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" podUID="d2b50788-4e25-4589-84b8-00851a2a18b7" Feb 26 11:27:06 crc kubenswrapper[4724]: I0226 11:27:06.145174 4724 scope.go:117] "RemoveContainer" containerID="f0114419db2843b1f6baa899c2bb4ea118535df743b2164195d4fc6cdf0298bc" Feb 26 11:27:06 crc kubenswrapper[4724]: I0226 11:27:06.907587 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:27:06 crc kubenswrapper[4724]: I0226 11:27:06.921156 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39700bc5-43f0-49b6-b510-523322e34eb5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l\" (UID: \"39700bc5-43f0-49b6-b510-523322e34eb5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:27:07 crc kubenswrapper[4724]: I0226 11:27:07.017102 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:27:07 crc kubenswrapper[4724]: I0226 11:27:07.820363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:07 crc kubenswrapper[4724]: I0226 11:27:07.820759 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:07 crc kubenswrapper[4724]: I0226 11:27:07.840931 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-webhook-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:07 crc kubenswrapper[4724]: I0226 11:27:07.840949 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48de473d-2e43-44ee-b0d1-db2c8e11fc2b-metrics-certs\") pod \"openstack-operator-controller-manager-75d9b57894-2862v\" (UID: \"48de473d-2e43-44ee-b0d1-db2c8e11fc2b\") " pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:07 crc kubenswrapper[4724]: I0226 11:27:07.906853 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:08 crc kubenswrapper[4724]: E0226 11:27:08.014248 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" Feb 26 11:27:08 crc kubenswrapper[4724]: E0226 11:27:08.014648 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cpdfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-wjjjc_openstack-operators(193a7bdd-a3a7-493d-8c99-a04d591e3a19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:27:08 crc kubenswrapper[4724]: E0226 11:27:08.016062 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" podUID="193a7bdd-a3a7-493d-8c99-a04d591e3a19" Feb 26 11:27:08 crc kubenswrapper[4724]: I0226 11:27:08.912831 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj"] Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.174004 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l"] Feb 26 11:27:09 crc kubenswrapper[4724]: W0226 11:27:09.204050 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39700bc5_43f0_49b6_b510_523322e34eb5.slice/crio-8f0cdd10a240fd4d3d505b236e7c8b051c8456aff9f4945fcb39cd0c1eac8b57 WatchSource:0}: Error finding container 8f0cdd10a240fd4d3d505b236e7c8b051c8456aff9f4945fcb39cd0c1eac8b57: Status 404 returned error can't find the container with id 8f0cdd10a240fd4d3d505b236e7c8b051c8456aff9f4945fcb39cd0c1eac8b57 Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.365765 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v"] Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.885712 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" event={"ID":"2178a458-4e8c-4d30-bbdb-8a0ef864fd80","Type":"ContainerStarted","Data":"3727a447b82836ed53a3251525dc97e7ccb1279555fdbc7e34157d087915c99e"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.886498 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.888495 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" event={"ID":"48de473d-2e43-44ee-b0d1-db2c8e11fc2b","Type":"ContainerStarted","Data":"fc37c61d33270238d0f58660fdb43e3f4fba991be8d861ed2af21c997ba26c70"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.888647 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" event={"ID":"48de473d-2e43-44ee-b0d1-db2c8e11fc2b","Type":"ContainerStarted","Data":"7514cafbb9273e25db3002b2e06ea90efbd910b4621eea344fd4b8438e0a0182"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.888809 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.891024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" event={"ID":"2714a834-e9ca-40b1-a73c-2b890783f29e","Type":"ContainerStarted","Data":"8cd95eaa419799996b57d0254ff8e7aca2ddef856111976d88516a7d11857f20"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.896699 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" event={"ID":"d04a7b9b-e3a4-4876-bb57-10d86295d9c0","Type":"ContainerStarted","Data":"6722388c46f29289d392408f57d0c263d7261823db6def876997162ca195da91"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.896972 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.906835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" event={"ID":"897f5a3f-a04e-4725-8a9a-0ce91c8bb372","Type":"ContainerStarted","Data":"817f752bf0c88d50d30bca1c36f1fa1631d92210d9d8b0e6649ecccbcaaf0e03"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.907732 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.919069 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" event={"ID":"d49437c7-7f60-4304-b216-dcf93e31be87","Type":"ContainerStarted","Data":"4cfdb6efe42182851cd20b86a47153d9d6161448cd6d0d4d7a5c3a596978c19b"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.920590 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.932955 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" event={"ID":"5b790d8b-575d-462e-a9b1-512d91261517","Type":"ContainerStarted","Data":"5101b4da06bbc8d31a3457a15fa20675cbd58c5bf177749f6c7f8d126e8c2cd4"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.933892 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.944815 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" event={"ID":"9ae55185-83a5-47ea-b54f-01b31471f512","Type":"ContainerStarted","Data":"ac2eff5490f664088692ca8bc8934c7837d985a373663a1aa17480393a6ae019"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.945202 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.956838 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" event={"ID":"d90588ea-6237-4fd0-a321-9c6db1e07525","Type":"ContainerStarted","Data":"eabba3cb801b867c0ac631c46ab3011f9c8256c315015abf5b8794f1277fc472"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.957372 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.960641 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" podStartSLOduration=4.772011309 podStartE2EDuration="35.960620219s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.307056692 +0000 UTC m=+1263.962795807" lastFinishedPulling="2026-02-26 11:27:08.495665612 +0000 UTC m=+1295.151404717" observedRunningTime="2026-02-26 11:27:09.958943877 +0000 UTC m=+1296.614682992" watchObservedRunningTime="2026-02-26 11:27:09.960620219 +0000 UTC m=+1296.616359334" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.960815 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" 
podStartSLOduration=4.109152047 podStartE2EDuration="35.960809974s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.761894633 +0000 UTC m=+1263.417633748" lastFinishedPulling="2026-02-26 11:27:08.61355256 +0000 UTC m=+1295.269291675" observedRunningTime="2026-02-26 11:27:09.932014579 +0000 UTC m=+1296.587753714" watchObservedRunningTime="2026-02-26 11:27:09.960809974 +0000 UTC m=+1296.616549089" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.966815 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" event={"ID":"0cfee1c3-df60-4944-a16e-e01dd310f2c4","Type":"ContainerStarted","Data":"2935d61eebbfe8019228de5f05b9f75f8f835343a9b6ac06ac7033e41129f9fa"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.967031 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.991719 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" event={"ID":"cd71c91d-33bb-4eae-9f27-84f39ef7653d","Type":"ContainerStarted","Data":"0a521392b5862b516b454c8dae8210e656e3e095d7761d40ea511ecae4ef9bd3"} Feb 26 11:27:09 crc kubenswrapper[4724]: I0226 11:27:09.991961 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.002668 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" event={"ID":"39700bc5-43f0-49b6-b510-523322e34eb5","Type":"ContainerStarted","Data":"8f0cdd10a240fd4d3d505b236e7c8b051c8456aff9f4945fcb39cd0c1eac8b57"} Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.015064 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" event={"ID":"f0ccafa2-8b59-49e6-b881-ffaee0c98646","Type":"ContainerStarted","Data":"dd8791ee18578c7203a33fad9369e284755e539b875d4fb2705b20fa9f7df18a"} Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.015371 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.021876 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" podStartSLOduration=11.023773402 podStartE2EDuration="35.021845161s" podCreationTimestamp="2026-02-26 11:26:35 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.245898012 +0000 UTC m=+1263.901637127" lastFinishedPulling="2026-02-26 11:27:01.243969771 +0000 UTC m=+1287.899708886" observedRunningTime="2026-02-26 11:27:10.01944749 +0000 UTC m=+1296.675186605" watchObservedRunningTime="2026-02-26 11:27:10.021845161 +0000 UTC m=+1296.677584276" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.022703 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" event={"ID":"731a1439-aa83-4119-ae37-23f526e6e73a","Type":"ContainerStarted","Data":"404cd007a34a7862e63f3e198b0dc30e400898e3f6f238e635669cd1c02bb222"} Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.023650 4724 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.028943 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" event={"ID":"dc7781b3-4d7b-4855-8e76-bb3ad2028a9c","Type":"ContainerStarted","Data":"67c6e1bf673efeab48d8f303ab62969fa80f8a1849985daceb4aacdf6e1ca658"} Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.033232 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" event={"ID":"b14d5ade-65f3-4402-bacd-5acc8ef39ce5","Type":"ContainerStarted","Data":"2bc6091fcfbff374f361d22d667ae32396d2e40208551c477f55c02509f46e56"} Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.034171 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.066760 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" podStartSLOduration=4.820913537 podStartE2EDuration="36.066736527s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.266650311 +0000 UTC m=+1263.922389426" lastFinishedPulling="2026-02-26 11:27:08.512473301 +0000 UTC m=+1295.168212416" observedRunningTime="2026-02-26 11:27:10.057351477 +0000 UTC m=+1296.713090612" watchObservedRunningTime="2026-02-26 11:27:10.066736527 +0000 UTC m=+1296.722475642" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.184187 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" podStartSLOduration=35.184154683 podStartE2EDuration="35.184154683s" podCreationTimestamp="2026-02-26 11:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:27:10.146585534 +0000 UTC m=+1296.802324669" watchObservedRunningTime="2026-02-26 11:27:10.184154683 +0000 UTC m=+1296.839893808" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.224524 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" podStartSLOduration=4.462488822 podStartE2EDuration="36.224505332s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.13649371 +0000 UTC m=+1263.792232825" lastFinishedPulling="2026-02-26 11:27:08.89851023 +0000 UTC m=+1295.554249335" observedRunningTime="2026-02-26 11:27:10.2165906 +0000 UTC m=+1296.872329715" watchObservedRunningTime="2026-02-26 11:27:10.224505332 +0000 UTC m=+1296.880244457" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.229437 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" podStartSLOduration=12.360146281 podStartE2EDuration="37.229418288s" podCreationTimestamp="2026-02-26 11:26:33 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.375258428 +0000 UTC m=+1263.030997553" lastFinishedPulling="2026-02-26 11:27:01.244530445 +0000 UTC m=+1287.900269560" observedRunningTime="2026-02-26 11:27:10.188497723 +0000 UTC m=+1296.844236858" 
watchObservedRunningTime="2026-02-26 11:27:10.229418288 +0000 UTC m=+1296.885157403" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.274979 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2r984" podStartSLOduration=3.8557610159999998 podStartE2EDuration="35.274958999s" podCreationTimestamp="2026-02-26 11:26:35 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.308962301 +0000 UTC m=+1263.964701416" lastFinishedPulling="2026-02-26 11:27:08.728160284 +0000 UTC m=+1295.383899399" observedRunningTime="2026-02-26 11:27:10.266307349 +0000 UTC m=+1296.922046494" watchObservedRunningTime="2026-02-26 11:27:10.274958999 +0000 UTC m=+1296.930698114" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.305980 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" podStartSLOduration=11.777439943 podStartE2EDuration="37.305964151s" podCreationTimestamp="2026-02-26 11:26:33 +0000 UTC" firstStartedPulling="2026-02-26 11:26:35.716612423 +0000 UTC m=+1262.372351538" lastFinishedPulling="2026-02-26 11:27:01.245136631 +0000 UTC m=+1287.900875746" observedRunningTime="2026-02-26 11:27:10.304526384 +0000 UTC m=+1296.960265509" watchObservedRunningTime="2026-02-26 11:27:10.305964151 +0000 UTC m=+1296.961703256" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.387669 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" podStartSLOduration=12.853268732 podStartE2EDuration="37.387644325s" podCreationTimestamp="2026-02-26 11:26:33 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.709908356 +0000 UTC m=+1263.365647471" lastFinishedPulling="2026-02-26 11:27:01.244283949 +0000 UTC m=+1287.900023064" observedRunningTime="2026-02-26 11:27:10.371811991 +0000 UTC m=+1297.027551116" watchObservedRunningTime="2026-02-26 11:27:10.387644325 +0000 UTC m=+1297.043383430" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.438495 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" podStartSLOduration=5.251940705 podStartE2EDuration="36.438477482s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.309030312 +0000 UTC m=+1263.964769427" lastFinishedPulling="2026-02-26 11:27:08.495567089 +0000 UTC m=+1295.151306204" observedRunningTime="2026-02-26 11:27:10.435790083 +0000 UTC m=+1297.091529208" watchObservedRunningTime="2026-02-26 11:27:10.438477482 +0000 UTC m=+1297.094216597" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.465225 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" podStartSLOduration=12.413170917 podStartE2EDuration="36.465202423s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.191886934 +0000 UTC m=+1263.847626059" lastFinishedPulling="2026-02-26 11:27:01.24391845 +0000 UTC m=+1287.899657565" observedRunningTime="2026-02-26 11:27:10.46310746 +0000 UTC m=+1297.118846575" watchObservedRunningTime="2026-02-26 11:27:10.465202423 +0000 UTC m=+1297.120941538" Feb 26 11:27:10 crc kubenswrapper[4724]: I0226 11:27:10.560803 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" podStartSLOduration=11.834683648 podStartE2EDuration="36.560781582s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.065962761 +0000 UTC m=+1263.721701876" lastFinishedPulling="2026-02-26 11:27:01.792060695 +0000 UTC m=+1288.447799810" observedRunningTime="2026-02-26 11:27:10.518713249 +0000 UTC m=+1297.174452384" watchObservedRunningTime="2026-02-26 11:27:10.560781582 +0000 UTC m=+1297.216520697" Feb 26 11:27:11 crc kubenswrapper[4724]: I0226 11:27:11.008776 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" podStartSLOduration=12.551887907 podStartE2EDuration="37.008755502s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.785412873 +0000 UTC m=+1263.441151988" lastFinishedPulling="2026-02-26 11:27:01.242280468 +0000 UTC m=+1287.898019583" observedRunningTime="2026-02-26 11:27:10.563506892 +0000 UTC m=+1297.219246007" watchObservedRunningTime="2026-02-26 11:27:11.008755502 +0000 UTC m=+1297.664494607" Feb 26 11:27:12 crc kubenswrapper[4724]: I0226 11:27:12.057618 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" event={"ID":"bc959a10-5f94-4d38-87d2-dda60f8ae078","Type":"ContainerStarted","Data":"1ee7bb41cc44daf8ca841b54b05d2c06abfd8aae3bb0225b9af5b91c4bc725e5"} Feb 26 11:27:12 crc kubenswrapper[4724]: I0226 11:27:12.058113 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:27:12 crc kubenswrapper[4724]: I0226 11:27:12.079778 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" podStartSLOduration=3.059387763 podStartE2EDuration="38.079753447s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.708646854 +0000 UTC m=+1263.364385969" lastFinishedPulling="2026-02-26 11:27:11.729012538 +0000 UTC m=+1298.384751653" observedRunningTime="2026-02-26 11:27:12.075372725 +0000 UTC m=+1298.731111850" watchObservedRunningTime="2026-02-26 11:27:12.079753447 +0000 UTC m=+1298.735492612" Feb 26 11:27:14 crc kubenswrapper[4724]: I0226 11:27:14.299033 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-sbkcr" Feb 26 11:27:14 crc kubenswrapper[4724]: I0226 11:27:14.395703 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-q9sb4" Feb 26 11:27:14 crc kubenswrapper[4724]: I0226 11:27:14.429189 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-p58p8" Feb 26 11:27:14 crc kubenswrapper[4724]: I0226 11:27:14.531895 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6zmmb" Feb 26 11:27:14 crc kubenswrapper[4724]: I0226 11:27:14.729030 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pxtxm" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.040797 4724 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rjxxw" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.089959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" event={"ID":"39700bc5-43f0-49b6-b510-523322e34eb5","Type":"ContainerStarted","Data":"9214477c84d7c02c2b9ac74548e163b020ba6116b1927b64c6a2660104974e95"} Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.091493 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.093465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" event={"ID":"da86929c-f438-4994-80be-1a7aa3b7b76e","Type":"ContainerStarted","Data":"a19fca93a6c4594700276aa8d56fd67260a58622ed82d78c34546516705637bd"} Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.093897 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.115224 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" event={"ID":"2714a834-e9ca-40b1-a73c-2b890783f29e","Type":"ContainerStarted","Data":"70ad6515663416d0b15e8e73b3f6e58bc4e4b48cfeae43cfbde56a47c52abbd2"} Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.115919 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.121815 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" event={"ID":"5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73","Type":"ContainerStarted","Data":"5b2af46a97d187c5850b16df6996b6dfff692e9cdd714d4bf54b278a6563cf67"} Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.122208 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.130900 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" event={"ID":"1c97807f-b47f-4762-80d8-a296d8108e19","Type":"ContainerStarted","Data":"756cf957209620bb12b22525636288d67fae48daea0345781626df51fb86d253"} Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.131670 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.140721 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" podStartSLOduration=36.41726391 podStartE2EDuration="41.140701364s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:27:09.220131157 +0000 UTC m=+1295.875870272" lastFinishedPulling="2026-02-26 11:27:13.943568611 +0000 UTC m=+1300.599307726" observedRunningTime="2026-02-26 11:27:15.13584995 +0000 UTC m=+1301.791589065" 
watchObservedRunningTime="2026-02-26 11:27:15.140701364 +0000 UTC m=+1301.796440469" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.174734 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" podStartSLOduration=4.112760227 podStartE2EDuration="41.174714251s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.768587443 +0000 UTC m=+1263.424326558" lastFinishedPulling="2026-02-26 11:27:13.830541467 +0000 UTC m=+1300.486280582" observedRunningTime="2026-02-26 11:27:15.170161085 +0000 UTC m=+1301.825900210" watchObservedRunningTime="2026-02-26 11:27:15.174714251 +0000 UTC m=+1301.830453366" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.207337 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" podStartSLOduration=36.305249823 podStartE2EDuration="41.207307163s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:27:09.040511885 +0000 UTC m=+1295.696251000" lastFinishedPulling="2026-02-26 11:27:13.942569215 +0000 UTC m=+1300.598308340" observedRunningTime="2026-02-26 11:27:15.202534541 +0000 UTC m=+1301.858273656" watchObservedRunningTime="2026-02-26 11:27:15.207307163 +0000 UTC m=+1301.863046278" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.237365 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" podStartSLOduration=4.073744432 podStartE2EDuration="41.237341049s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.779538303 +0000 UTC m=+1263.435277418" lastFinishedPulling="2026-02-26 11:27:13.94313492 +0000 UTC m=+1300.598874035" observedRunningTime="2026-02-26 11:27:15.230744271 +0000 UTC m=+1301.886483396" watchObservedRunningTime="2026-02-26 11:27:15.237341049 +0000 UTC m=+1301.893080174" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.253643 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" podStartSLOduration=3.871306298 podStartE2EDuration="41.253619265s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.562479155 +0000 UTC m=+1263.218218270" lastFinishedPulling="2026-02-26 11:27:13.944792122 +0000 UTC m=+1300.600531237" observedRunningTime="2026-02-26 11:27:15.25302793 +0000 UTC m=+1301.908767065" watchObservedRunningTime="2026-02-26 11:27:15.253619265 +0000 UTC m=+1301.909358390" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.453635 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pwg6d" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.501941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6j4fp" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.615326 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ntcpr" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.651314 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-589c568786-qhw9r" Feb 26 11:27:15 crc kubenswrapper[4724]: I0226 11:27:15.724021 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-f8k6f" Feb 26 11:27:16 crc kubenswrapper[4724]: I0226 11:27:16.061077 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pgtk4" Feb 26 11:27:16 crc kubenswrapper[4724]: I0226 11:27:16.906603 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:27:16 crc kubenswrapper[4724]: I0226 11:27:16.906968 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:27:17 crc kubenswrapper[4724]: I0226 11:27:17.913247 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" Feb 26 11:27:18 crc kubenswrapper[4724]: I0226 11:27:18.151175 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" event={"ID":"d2b50788-4e25-4589-84b8-00851a2a18b7","Type":"ContainerStarted","Data":"ab1e34fbf10cd3787bb31e5493bcc4d54e133ea6b9959a7b206c080c746df51a"} Feb 26 11:27:18 crc kubenswrapper[4724]: I0226 11:27:18.151654 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:27:18 crc kubenswrapper[4724]: I0226 11:27:18.152687 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" event={"ID":"9bcf19f6-1ed9-4315-a263-1bd5c8da7774","Type":"ContainerStarted","Data":"c41c076d42a70b911fb8c6b81c6abc122fddf32a38a7b6838154ff3217285261"} Feb 26 11:27:18 crc kubenswrapper[4724]: I0226 11:27:18.152869 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:27:18 crc kubenswrapper[4724]: I0226 11:27:18.176140 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" podStartSLOduration=3.901862907 podStartE2EDuration="44.176111389s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.775299395 +0000 UTC m=+1263.431038510" lastFinishedPulling="2026-02-26 11:27:17.049547887 +0000 UTC m=+1303.705286992" observedRunningTime="2026-02-26 11:27:18.169720107 +0000 UTC m=+1304.825459232" watchObservedRunningTime="2026-02-26 11:27:18.176111389 +0000 UTC m=+1304.831850504" Feb 26 11:27:18 crc kubenswrapper[4724]: I0226 11:27:18.203607 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" podStartSLOduration=4.217299105 podStartE2EDuration="44.20358456s" 
podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:36.821166725 +0000 UTC m=+1263.476905840" lastFinishedPulling="2026-02-26 11:27:16.80745218 +0000 UTC m=+1303.463191295" observedRunningTime="2026-02-26 11:27:18.19692035 +0000 UTC m=+1304.852659465" watchObservedRunningTime="2026-02-26 11:27:18.20358456 +0000 UTC m=+1304.859323695" Feb 26 11:27:19 crc kubenswrapper[4724]: E0226 11:27:19.978512 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" podUID="193a7bdd-a3a7-493d-8c99-a04d591e3a19" Feb 26 11:27:20 crc kubenswrapper[4724]: I0226 11:27:20.280046 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8n9qj" Feb 26 11:27:24 crc kubenswrapper[4724]: I0226 11:27:24.476611 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-6qq4t" Feb 26 11:27:24 crc kubenswrapper[4724]: I0226 11:27:24.506862 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wxw2f" Feb 26 11:27:24 crc kubenswrapper[4724]: I0226 11:27:24.760118 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-rqsqh" Feb 26 11:27:24 crc kubenswrapper[4724]: I0226 11:27:24.783343 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bfrsl" Feb 26 11:27:24 crc kubenswrapper[4724]: I0226 11:27:24.865761 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" Feb 26 11:27:24 crc kubenswrapper[4724]: I0226 11:27:24.944468 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-k75jd" Feb 26 11:27:27 crc kubenswrapper[4724]: I0226 11:27:27.023223 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" Feb 26 11:27:36 crc kubenswrapper[4724]: I0226 11:27:36.276891 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" event={"ID":"193a7bdd-a3a7-493d-8c99-a04d591e3a19","Type":"ContainerStarted","Data":"aebf1e6059a8f9c7c82ed0729e66006eec2e3b076cc9cbd67e48061c9b1150dd"} Feb 26 11:27:36 crc kubenswrapper[4724]: I0226 11:27:36.277869 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:27:36 crc kubenswrapper[4724]: I0226 11:27:36.292995 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" podStartSLOduration=3.817882654 podStartE2EDuration="1m2.292955525s" podCreationTimestamp="2026-02-26 11:26:34 +0000 UTC" firstStartedPulling="2026-02-26 11:26:37.289888624 +0000 UTC m=+1263.945627739" 
lastFinishedPulling="2026-02-26 11:27:35.764961495 +0000 UTC m=+1322.420700610" observedRunningTime="2026-02-26 11:27:36.291935189 +0000 UTC m=+1322.947674314" watchObservedRunningTime="2026-02-26 11:27:36.292955525 +0000 UTC m=+1322.948694650" Feb 26 11:27:44 crc kubenswrapper[4724]: I0226 11:27:44.865589 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-wjjjc" Feb 26 11:27:46 crc kubenswrapper[4724]: I0226 11:27:46.906503 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:27:46 crc kubenswrapper[4724]: I0226 11:27:46.906852 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.055396 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c79hk"] Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.056874 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.060241 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.060455 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.060925 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7nrjs" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.062268 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.093330 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c79hk"] Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.134626 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-km282"] Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.136445 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.139216 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.207545 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-km282"] Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.210447 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2f2r\" (UniqueName: \"kubernetes.io/projected/26b204be-3e88-4df5-aeb0-202f78e065a6-kube-api-access-m2f2r\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.210528 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-config\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.210552 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.210603 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5f7817-1dcd-4d76-9817-2ceebf76317f-config\") pod \"dnsmasq-dns-675f4bcbfc-c79hk\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.210621 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46wn\" (UniqueName: \"kubernetes.io/projected/fe5f7817-1dcd-4d76-9817-2ceebf76317f-kube-api-access-s46wn\") pod \"dnsmasq-dns-675f4bcbfc-c79hk\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.311654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46wn\" (UniqueName: \"kubernetes.io/projected/fe5f7817-1dcd-4d76-9817-2ceebf76317f-kube-api-access-s46wn\") pod \"dnsmasq-dns-675f4bcbfc-c79hk\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.311732 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2f2r\" (UniqueName: \"kubernetes.io/projected/26b204be-3e88-4df5-aeb0-202f78e065a6-kube-api-access-m2f2r\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.311784 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-config\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.311807 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.311858 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5f7817-1dcd-4d76-9817-2ceebf76317f-config\") pod \"dnsmasq-dns-675f4bcbfc-c79hk\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.312929 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5f7817-1dcd-4d76-9817-2ceebf76317f-config\") pod \"dnsmasq-dns-675f4bcbfc-c79hk\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.312982 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.313109 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-config\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.340741 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46wn\" (UniqueName: \"kubernetes.io/projected/fe5f7817-1dcd-4d76-9817-2ceebf76317f-kube-api-access-s46wn\") pod \"dnsmasq-dns-675f4bcbfc-c79hk\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.357561 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2f2r\" (UniqueName: \"kubernetes.io/projected/26b204be-3e88-4df5-aeb0-202f78e065a6-kube-api-access-m2f2r\") pod \"dnsmasq-dns-78dd6ddcc-km282\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.379404 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.459632 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.947929 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c79hk"] Feb 26 11:27:59 crc kubenswrapper[4724]: I0226 11:27:59.986704 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-km282"] Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.136037 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535088-zp6m5"] Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.138072 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.144806 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.144874 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.144806 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.160740 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-zp6m5"] Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.229992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4pt\" (UniqueName: \"kubernetes.io/projected/07227daa-9b2f-4573-a280-84d80a8b9db7-kube-api-access-bf4pt\") pod \"auto-csr-approver-29535088-zp6m5\" (UID: \"07227daa-9b2f-4573-a280-84d80a8b9db7\") " pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.331793 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4pt\" (UniqueName: \"kubernetes.io/projected/07227daa-9b2f-4573-a280-84d80a8b9db7-kube-api-access-bf4pt\") pod \"auto-csr-approver-29535088-zp6m5\" (UID: \"07227daa-9b2f-4573-a280-84d80a8b9db7\") " pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.353783 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4pt\" (UniqueName: \"kubernetes.io/projected/07227daa-9b2f-4573-a280-84d80a8b9db7-kube-api-access-bf4pt\") pod \"auto-csr-approver-29535088-zp6m5\" (UID: \"07227daa-9b2f-4573-a280-84d80a8b9db7\") " pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.445883 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" event={"ID":"fe5f7817-1dcd-4d76-9817-2ceebf76317f","Type":"ContainerStarted","Data":"ab93308fd0ac54f9c5edfeb632507afbf30fd380641bb4382b9d59f1aade1da4"} Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.447942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" event={"ID":"26b204be-3e88-4df5-aeb0-202f78e065a6","Type":"ContainerStarted","Data":"403f2f736deb82b44ca37f268a5a5148ce672dcd6682fb1010024e70befff92a"} Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.472469 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:00 crc kubenswrapper[4724]: I0226 11:28:00.938152 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-zp6m5"] Feb 26 11:28:00 crc kubenswrapper[4724]: W0226 11:28:00.940362 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07227daa_9b2f_4573_a280_84d80a8b9db7.slice/crio-6c3c7be858766cca30497a84df54d039d6c357bd3d6521e74873a1b711ac823d WatchSource:0}: Error finding container 6c3c7be858766cca30497a84df54d039d6c357bd3d6521e74873a1b711ac823d: Status 404 returned error can't find the container with id 6c3c7be858766cca30497a84df54d039d6c357bd3d6521e74873a1b711ac823d Feb 26 11:28:01 crc kubenswrapper[4724]: I0226 11:28:01.481355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" event={"ID":"07227daa-9b2f-4573-a280-84d80a8b9db7","Type":"ContainerStarted","Data":"6c3c7be858766cca30497a84df54d039d6c357bd3d6521e74873a1b711ac823d"} Feb 26 11:28:01 crc kubenswrapper[4724]: I0226 11:28:01.873037 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c79hk"] Feb 26 11:28:01 crc kubenswrapper[4724]: I0226 11:28:01.905255 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5txwr"] Feb 26 11:28:01 crc kubenswrapper[4724]: I0226 11:28:01.906448 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:01 crc kubenswrapper[4724]: I0226 11:28:01.921483 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5txwr"] Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.068354 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wt8\" (UniqueName: \"kubernetes.io/projected/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-kube-api-access-z9wt8\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.068431 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.068452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-config\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.174307 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wt8\" (UniqueName: \"kubernetes.io/projected/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-kube-api-access-z9wt8\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.174413 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.174437 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-config\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.175310 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-config\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.176253 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.239802 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wt8\" (UniqueName: \"kubernetes.io/projected/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-kube-api-access-z9wt8\") pod \"dnsmasq-dns-666b6646f7-5txwr\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.290377 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-km282"] Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.326309 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z6pc7"] Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.327450 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.346122 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z6pc7"] Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.480852 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6446d\" (UniqueName: \"kubernetes.io/projected/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-kube-api-access-6446d\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.480949 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-config\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.481013 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.538678 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.583029 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.583086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6446d\" (UniqueName: \"kubernetes.io/projected/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-kube-api-access-6446d\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.583155 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-config\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.584238 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.584910 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-config\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: 
I0226 11:28:02.623477 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6446d\" (UniqueName: \"kubernetes.io/projected/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-kube-api-access-6446d\") pod \"dnsmasq-dns-57d769cc4f-z6pc7\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:02 crc kubenswrapper[4724]: I0226 11:28:02.651601 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.186246 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.188375 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.201336 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.201578 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.201749 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.201956 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.202991 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.203414 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.203718 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cg4xv" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.221309 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300612 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300647 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300675 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dd2j\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-kube-api-access-4dd2j\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300745 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300772 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300826 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300874 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300905 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300929 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300952 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.300967 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.409984 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dd2j\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-kube-api-access-4dd2j\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: 
\"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410171 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410270 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410371 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410433 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410463 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410601 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410624 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.410788 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.417796 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.418085 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.418580 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.418769 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.421156 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.429940 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.433888 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.441008 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.472025 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.478866 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4dd2j\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-kube-api-access-4dd2j\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.492868 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.498972 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.519036 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.526327 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.533702 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.533892 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.534004 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.534136 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.535648 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cfjpw" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.536526 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.544995 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.550328 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612811 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bt2\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-kube-api-access-49bt2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612856 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612892 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d7fdccb-4fd0-4a6e-9241-add667b9a537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612931 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612961 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.612991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.613030 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.613067 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.613102 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d7fdccb-4fd0-4a6e-9241-add667b9a537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.613130 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714077 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714135 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714165 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714284 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714328 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d7fdccb-4fd0-4a6e-9241-add667b9a537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714390 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714429 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714455 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bt2\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-kube-api-access-49bt2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714488 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.714515 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d7fdccb-4fd0-4a6e-9241-add667b9a537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.719742 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.720371 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.721282 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.721451 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.723818 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.723824 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d7fdccb-4fd0-4a6e-9241-add667b9a537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.725520 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.744748 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.744958 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.745030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d7fdccb-4fd0-4a6e-9241-add667b9a537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.788128 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bt2\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-kube-api-access-49bt2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.835976 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z6pc7"] Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.848834 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5txwr"] Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.851700 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:03 crc kubenswrapper[4724]: I0226 11:28:03.918212 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.260041 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.579820 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" event={"ID":"ccbe85f6-ff6c-49c2-9304-72ae30711c4b","Type":"ContainerStarted","Data":"98b4e71fa59eb1d1e6834e78ecceecd4e8313fd4d7c535704a9a170f3a234224"} Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.590007 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" event={"ID":"07227daa-9b2f-4573-a280-84d80a8b9db7","Type":"ContainerStarted","Data":"6a4e8c5deeff5e7e2d8b1dcbde0bdd01b3fae4fe6b90c4b8b31772fee0d41700"} Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.590053 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.593173 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad24283d-3357-4230-a2b2-3d5ed0fefa7f","Type":"ContainerStarted","Data":"cc7734e6d220507580f813de6d45266da4278dd3a73d937cd7ca08f0d4cad186"} Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.595496 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" event={"ID":"5aaccc92-86d5-4ad9-a198-2a41fd2c0675","Type":"ContainerStarted","Data":"a58f4e7387eb1534170ec91569cdef36c96531f83bde02302c0245fc7c720d58"} Feb 26 11:28:04 crc kubenswrapper[4724]: W0226 11:28:04.616330 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d7fdccb_4fd0_4a6e_9241_add667b9a537.slice/crio-a84a542ea8195b6ea4bec9a645a70add310134a2247d1b2753568f2b55f10e11 WatchSource:0}: Error finding container a84a542ea8195b6ea4bec9a645a70add310134a2247d1b2753568f2b55f10e11: Status 404 returned error can't find the container with id a84a542ea8195b6ea4bec9a645a70add310134a2247d1b2753568f2b55f10e11 Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.625445 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" podStartSLOduration=2.6085881459999998 podStartE2EDuration="4.625422374s" podCreationTimestamp="2026-02-26 11:28:00 +0000 UTC" firstStartedPulling="2026-02-26 11:28:00.943748529 +0000 UTC m=+1347.599487644" lastFinishedPulling="2026-02-26 11:28:02.960582757 +0000 UTC m=+1349.616321872" observedRunningTime="2026-02-26 11:28:04.621601046 +0000 UTC m=+1351.277340161" watchObservedRunningTime="2026-02-26 11:28:04.625422374 +0000 UTC m=+1351.281161489" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.750894 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.757013 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.770648 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.770874 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.771030 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6b9wz" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.771187 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.781391 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.798553 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.951767 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abc9b19-0018-46d1-a119-0ffb069a1795-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.951829 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-config-data-default\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.951887 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-kolla-config\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.951918 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.951950 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6abc9b19-0018-46d1-a119-0ffb069a1795-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.951995 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvwk\" (UniqueName: \"kubernetes.io/projected/6abc9b19-0018-46d1-a119-0ffb069a1795-kube-api-access-svvwk\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.952044 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abc9b19-0018-46d1-a119-0ffb069a1795-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:04 crc kubenswrapper[4724]: I0226 11:28:04.952073 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055051 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abc9b19-0018-46d1-a119-0ffb069a1795-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055226 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-config-data-default\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055361 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-kolla-config\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055432 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055523 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6abc9b19-0018-46d1-a119-0ffb069a1795-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055657 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvwk\" (UniqueName: \"kubernetes.io/projected/6abc9b19-0018-46d1-a119-0ffb069a1795-kube-api-access-svvwk\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055754 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abc9b19-0018-46d1-a119-0ffb069a1795-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.055790 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: 
\"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.057026 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6abc9b19-0018-46d1-a119-0ffb069a1795-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.057507 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-kolla-config\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.058435 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.060945 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abc9b19-0018-46d1-a119-0ffb069a1795-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.061310 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.086383 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abc9b19-0018-46d1-a119-0ffb069a1795-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.087734 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvwk\" (UniqueName: \"kubernetes.io/projected/6abc9b19-0018-46d1-a119-0ffb069a1795-kube-api-access-svvwk\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.093411 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6abc9b19-0018-46d1-a119-0ffb069a1795-config-data-default\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.104382 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"6abc9b19-0018-46d1-a119-0ffb069a1795\") " pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.120384 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.625594 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d7fdccb-4fd0-4a6e-9241-add667b9a537","Type":"ContainerStarted","Data":"a84a542ea8195b6ea4bec9a645a70add310134a2247d1b2753568f2b55f10e11"} Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.632937 4724 generic.go:334] "Generic (PLEG): container finished" podID="07227daa-9b2f-4573-a280-84d80a8b9db7" containerID="6a4e8c5deeff5e7e2d8b1dcbde0bdd01b3fae4fe6b90c4b8b31772fee0d41700" exitCode=0 Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.633026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" event={"ID":"07227daa-9b2f-4573-a280-84d80a8b9db7","Type":"ContainerDied","Data":"6a4e8c5deeff5e7e2d8b1dcbde0bdd01b3fae4fe6b90c4b8b31772fee0d41700"} Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.852698 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.863273 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.875313 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.876054 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.876318 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dr9jf" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.876330 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.877232 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.892359 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 11:28:05 crc kubenswrapper[4724]: W0226 11:28:05.965800 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6abc9b19_0018_46d1_a119_0ffb069a1795.slice/crio-81be83fdf649300136655b2ede4e06495f38f1bc4b84b57f0113168efea3aa46 WatchSource:0}: Error finding container 81be83fdf649300136655b2ede4e06495f38f1bc4b84b57f0113168efea3aa46: Status 404 returned error can't find the container with id 81be83fdf649300136655b2ede4e06495f38f1bc4b84b57f0113168efea3aa46 Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.982992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983083 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983128 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983156 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983211 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983225 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983243 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjwj\" (UniqueName: \"kubernetes.io/projected/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-kube-api-access-qmjwj\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:05 crc kubenswrapper[4724]: I0226 11:28:05.983484 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.084425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.084472 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.084502 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.084838 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.086019 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.084527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmjwj\" (UniqueName: \"kubernetes.io/projected/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-kube-api-access-qmjwj\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.086414 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.087117 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.087158 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.087194 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.087538 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.088454 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " 
pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.095607 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.109089 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.120931 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.153443 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmjwj\" (UniqueName: \"kubernetes.io/projected/b0d66ab1-513b-452a-9f31-bfc4b4be6c18-kube-api-access-qmjwj\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.164367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b0d66ab1-513b-452a-9f31-bfc4b4be6c18\") " pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.196621 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.200921 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.224226 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.236799 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.265325 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.265526 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qd4xn" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.265685 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.293981 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70be877-253f-4859-ae54-bd241f38cb93-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.294031 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70be877-253f-4859-ae54-bd241f38cb93-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.294069 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzcj\" (UniqueName: \"kubernetes.io/projected/b70be877-253f-4859-ae54-bd241f38cb93-kube-api-access-4dzcj\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.294103 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b70be877-253f-4859-ae54-bd241f38cb93-config-data\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.294139 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b70be877-253f-4859-ae54-bd241f38cb93-kolla-config\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.398508 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70be877-253f-4859-ae54-bd241f38cb93-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.398569 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70be877-253f-4859-ae54-bd241f38cb93-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.398623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzcj\" (UniqueName: 
\"kubernetes.io/projected/b70be877-253f-4859-ae54-bd241f38cb93-kube-api-access-4dzcj\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.398673 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b70be877-253f-4859-ae54-bd241f38cb93-config-data\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.398722 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b70be877-253f-4859-ae54-bd241f38cb93-kolla-config\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.400836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b70be877-253f-4859-ae54-bd241f38cb93-kolla-config\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.402686 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b70be877-253f-4859-ae54-bd241f38cb93-config-data\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.407990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70be877-253f-4859-ae54-bd241f38cb93-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.408525 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70be877-253f-4859-ae54-bd241f38cb93-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.426415 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzcj\" (UniqueName: \"kubernetes.io/projected/b70be877-253f-4859-ae54-bd241f38cb93-kube-api-access-4dzcj\") pod \"memcached-0\" (UID: \"b70be877-253f-4859-ae54-bd241f38cb93\") " pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.587827 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 26 11:28:06 crc kubenswrapper[4724]: I0226 11:28:06.684666 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abc9b19-0018-46d1-a119-0ffb069a1795","Type":"ContainerStarted","Data":"81be83fdf649300136655b2ede4e06495f38f1bc4b84b57f0113168efea3aa46"} Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.337990 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.461464 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf4pt\" (UniqueName: \"kubernetes.io/projected/07227daa-9b2f-4573-a280-84d80a8b9db7-kube-api-access-bf4pt\") pod \"07227daa-9b2f-4573-a280-84d80a8b9db7\" (UID: \"07227daa-9b2f-4573-a280-84d80a8b9db7\") " Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.475508 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07227daa-9b2f-4573-a280-84d80a8b9db7-kube-api-access-bf4pt" (OuterVolumeSpecName: "kube-api-access-bf4pt") pod "07227daa-9b2f-4573-a280-84d80a8b9db7" (UID: "07227daa-9b2f-4573-a280-84d80a8b9db7"). InnerVolumeSpecName "kube-api-access-bf4pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.566121 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4pt\" (UniqueName: \"kubernetes.io/projected/07227daa-9b2f-4573-a280-84d80a8b9db7-kube-api-access-bf4pt\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.652837 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 11:28:08 crc kubenswrapper[4724]: W0226 11:28:08.675304 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d66ab1_513b_452a_9f31_bfc4b4be6c18.slice/crio-7b93ab5c19e94186efc6384ce2e0224f10e68207b565e11ac354da970416b38f WatchSource:0}: Error finding container 7b93ab5c19e94186efc6384ce2e0224f10e68207b565e11ac354da970416b38f: Status 404 returned error can't find the container with id 7b93ab5c19e94186efc6384ce2e0224f10e68207b565e11ac354da970416b38f Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.715393 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:28:08 crc kubenswrapper[4724]: E0226 11:28:08.715705 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07227daa-9b2f-4573-a280-84d80a8b9db7" containerName="oc" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.715717 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="07227daa-9b2f-4573-a280-84d80a8b9db7" containerName="oc" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.715876 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="07227daa-9b2f-4573-a280-84d80a8b9db7" containerName="oc" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.725359 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.734695 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tr4dw" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.775317 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.846162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b0d66ab1-513b-452a-9f31-bfc4b4be6c18","Type":"ContainerStarted","Data":"7b93ab5c19e94186efc6384ce2e0224f10e68207b565e11ac354da970416b38f"} Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.887629 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlr7\" (UniqueName: \"kubernetes.io/projected/a9a1a92d-3769-4901-89b0-2fa52cbb547a-kube-api-access-8rlr7\") pod \"kube-state-metrics-0\" (UID: \"a9a1a92d-3769-4901-89b0-2fa52cbb547a\") " pod="openstack/kube-state-metrics-0" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.898123 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" event={"ID":"07227daa-9b2f-4573-a280-84d80a8b9db7","Type":"ContainerDied","Data":"6c3c7be858766cca30497a84df54d039d6c357bd3d6521e74873a1b711ac823d"} Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.898205 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c3c7be858766cca30497a84df54d039d6c357bd3d6521e74873a1b711ac823d" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.898310 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535088-zp6m5" Feb 26 11:28:08 crc kubenswrapper[4724]: I0226 11:28:08.990549 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlr7\" (UniqueName: \"kubernetes.io/projected/a9a1a92d-3769-4901-89b0-2fa52cbb547a-kube-api-access-8rlr7\") pod \"kube-state-metrics-0\" (UID: \"a9a1a92d-3769-4901-89b0-2fa52cbb547a\") " pod="openstack/kube-state-metrics-0" Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.007021 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.015296 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlr7\" (UniqueName: \"kubernetes.io/projected/a9a1a92d-3769-4901-89b0-2fa52cbb547a-kube-api-access-8rlr7\") pod \"kube-state-metrics-0\" (UID: \"a9a1a92d-3769-4901-89b0-2fa52cbb547a\") " pod="openstack/kube-state-metrics-0" Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.074979 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.429974 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-zvg5w"] Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.430035 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535082-zvg5w"] Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.913532 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b70be877-253f-4859-ae54-bd241f38cb93","Type":"ContainerStarted","Data":"cb41d18554de7c4427d3a527b3e6383659eb8093905c1fdc37d7acb25ee875e1"} Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.947200 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:28:09 crc kubenswrapper[4724]: I0226 11:28:09.987990 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be10571b-4581-4365-9f84-a1e04076f8d4" path="/var/lib/kubelet/pods/be10571b-4581-4365-9f84-a1e04076f8d4/volumes" Feb 26 11:28:10 crc kubenswrapper[4724]: W0226 11:28:10.007894 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9a1a92d_3769_4901_89b0_2fa52cbb547a.slice/crio-3d462953177811c2f21ed66141f10056187043dca7c2504e933742b7f4d697ce WatchSource:0}: Error finding container 3d462953177811c2f21ed66141f10056187043dca7c2504e933742b7f4d697ce: Status 404 returned error can't find the container with id 3d462953177811c2f21ed66141f10056187043dca7c2504e933742b7f4d697ce Feb 26 11:28:10 crc kubenswrapper[4724]: I0226 11:28:10.937170 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9a1a92d-3769-4901-89b0-2fa52cbb547a","Type":"ContainerStarted","Data":"3d462953177811c2f21ed66141f10056187043dca7c2504e933742b7f4d697ce"} Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.423262 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x9682"] Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.424573 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.490820 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-log-ovn\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.490913 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8939ea-2d97-461c-ad75-cba4379157f7-combined-ca-bundle\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.490948 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-run\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.490978 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b8939ea-2d97-461c-ad75-cba4379157f7-scripts\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.491017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6pcs\" (UniqueName: \"kubernetes.io/projected/5b8939ea-2d97-461c-ad75-cba4379157f7-kube-api-access-c6pcs\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.491046 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8939ea-2d97-461c-ad75-cba4379157f7-ovn-controller-tls-certs\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.491110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-run-ovn\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.502558 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x9682"] Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.516540 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.516979 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-b25jg" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.518074 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.565589 4724 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wsr8k"] Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.567755 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599339 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6pcs\" (UniqueName: \"kubernetes.io/projected/5b8939ea-2d97-461c-ad75-cba4379157f7-kube-api-access-c6pcs\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599403 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8939ea-2d97-461c-ad75-cba4379157f7-ovn-controller-tls-certs\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599441 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-log\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-etc-ovs\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599520 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5948e8de-f31a-4efb-80dc-e8dfb083ab79-scripts\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599553 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-lib\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599584 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-run-ovn\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599635 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-log-ovn\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599668 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghzb\" (UniqueName: 
\"kubernetes.io/projected/5948e8de-f31a-4efb-80dc-e8dfb083ab79-kube-api-access-hghzb\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599716 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-run\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599756 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8939ea-2d97-461c-ad75-cba4379157f7-combined-ca-bundle\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599786 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-run\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.599810 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b8939ea-2d97-461c-ad75-cba4379157f7-scripts\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.600696 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-log-ovn\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.600870 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-run\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.601681 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b8939ea-2d97-461c-ad75-cba4379157f7-var-run-ovn\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.605219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b8939ea-2d97-461c-ad75-cba4379157f7-scripts\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.606411 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b8939ea-2d97-461c-ad75-cba4379157f7-ovn-controller-tls-certs\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 
11:28:11.607341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8939ea-2d97-461c-ad75-cba4379157f7-combined-ca-bundle\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.619755 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6pcs\" (UniqueName: \"kubernetes.io/projected/5b8939ea-2d97-461c-ad75-cba4379157f7-kube-api-access-c6pcs\") pod \"ovn-controller-x9682\" (UID: \"5b8939ea-2d97-461c-ad75-cba4379157f7\") " pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.624067 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wsr8k"] Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701023 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghzb\" (UniqueName: \"kubernetes.io/projected/5948e8de-f31a-4efb-80dc-e8dfb083ab79-kube-api-access-hghzb\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-run\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701127 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-log\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-etc-ovs\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701191 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5948e8de-f31a-4efb-80dc-e8dfb083ab79-scripts\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701213 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-lib\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.701798 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-lib\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.702103 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-run\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.702762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-var-log\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.702881 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5948e8de-f31a-4efb-80dc-e8dfb083ab79-etc-ovs\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.704794 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5948e8de-f31a-4efb-80dc-e8dfb083ab79-scripts\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.724806 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghzb\" (UniqueName: \"kubernetes.io/projected/5948e8de-f31a-4efb-80dc-e8dfb083ab79-kube-api-access-hghzb\") pod \"ovn-controller-ovs-wsr8k\" (UID: \"5948e8de-f31a-4efb-80dc-e8dfb083ab79\") " pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.757532 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.760026 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.768658 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.769317 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.769487 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.769639 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fdbfl" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.776121 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.779485 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.789879 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x9682" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.899027 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905266 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/028cb20f-b715-40db-94c1-38bfb934ef53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905362 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028cb20f-b715-40db-94c1-38bfb934ef53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905401 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccjw\" (UniqueName: \"kubernetes.io/projected/028cb20f-b715-40db-94c1-38bfb934ef53-kube-api-access-sccjw\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905449 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905695 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:11 crc kubenswrapper[4724]: I0226 11:28:11.905736 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028cb20f-b715-40db-94c1-38bfb934ef53-config\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.006809 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/028cb20f-b715-40db-94c1-38bfb934ef53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.006874 
4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.006918 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028cb20f-b715-40db-94c1-38bfb934ef53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.006960 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccjw\" (UniqueName: \"kubernetes.io/projected/028cb20f-b715-40db-94c1-38bfb934ef53-kube-api-access-sccjw\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.007013 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.007052 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.007083 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028cb20f-b715-40db-94c1-38bfb934ef53-config\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.007147 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.007369 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/028cb20f-b715-40db-94c1-38bfb934ef53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.008118 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.009079 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028cb20f-b715-40db-94c1-38bfb934ef53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " 
pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.010534 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028cb20f-b715-40db-94c1-38bfb934ef53-config\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.022475 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.022849 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.029468 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccjw\" (UniqueName: \"kubernetes.io/projected/028cb20f-b715-40db-94c1-38bfb934ef53-kube-api-access-sccjw\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.036926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/028cb20f-b715-40db-94c1-38bfb934ef53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.056945 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"028cb20f-b715-40db-94c1-38bfb934ef53\") " pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:12 crc kubenswrapper[4724]: I0226 11:28:12.115022 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.264162 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.266098 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.271735 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-575xq" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.272111 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.272427 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.279590 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.285292 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398249 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398316 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398339 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-config\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398375 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398397 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398453 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398499 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrwc\" (UniqueName: \"kubernetes.io/projected/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-kube-api-access-btrwc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " 
pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.398515 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500533 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500581 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-config\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500622 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500819 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btrwc\" (UniqueName: \"kubernetes.io/projected/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-kube-api-access-btrwc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500843 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500917 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.500981 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.501496 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.501769 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-config\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.503231 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.521315 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.522258 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.525700 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.529135 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btrwc\" (UniqueName: \"kubernetes.io/projected/6f3d9665-0fdf-4b18-a4cb-1e84f24327ca-kube-api-access-btrwc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.544598 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca\") " pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:15 crc kubenswrapper[4724]: I0226 11:28:15.595172 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:16 crc kubenswrapper[4724]: I0226 11:28:16.906367 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:28:16 crc kubenswrapper[4724]: I0226 11:28:16.906597 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:28:16 crc kubenswrapper[4724]: I0226 11:28:16.906637 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:28:16 crc kubenswrapper[4724]: I0226 11:28:16.907274 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89545c6222687528337cf32ba9bda30e19443137c7e0933c297f827f49d03a36"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:28:16 crc kubenswrapper[4724]: I0226 11:28:16.907322 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://89545c6222687528337cf32ba9bda30e19443137c7e0933c297f827f49d03a36" gracePeriod=600 Feb 26 11:28:17 crc kubenswrapper[4724]: I0226 11:28:17.110372 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="89545c6222687528337cf32ba9bda30e19443137c7e0933c297f827f49d03a36" exitCode=0 Feb 26 11:28:17 crc kubenswrapper[4724]: I0226 11:28:17.110420 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"89545c6222687528337cf32ba9bda30e19443137c7e0933c297f827f49d03a36"} Feb 26 11:28:17 crc kubenswrapper[4724]: I0226 11:28:17.110463 4724 scope.go:117] "RemoveContainer" containerID="9ba5115481d1102dd3adf13dea4151bf50f3cbd49195796f340f8393348a53ce" Feb 26 11:28:26 crc kubenswrapper[4724]: E0226 11:28:26.332269 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 26 11:28:26 crc kubenswrapper[4724]: E0226 11:28:26.332926 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n694h8ch5ffh7dhcfh685h584h8fh7bh77h56hd4h589hdch6fh64ch55chc4hd7h58dh56dh5b9h685h56h64dh67h658h5bchd9h647h57fh547q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dzcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(b70be877-253f-4859-ae54-bd241f38cb93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:26 crc kubenswrapper[4724]: E0226 11:28:26.334427 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="b70be877-253f-4859-ae54-bd241f38cb93" Feb 26 11:28:26 crc kubenswrapper[4724]: E0226 11:28:26.370558 4724 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 11:28:26 crc kubenswrapper[4724]: E0226 11:28:26.370957 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(4d7fdccb-4fd0-4a6e-9241-add667b9a537): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:26 crc kubenswrapper[4724]: E0226 11:28:26.372166 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" Feb 26 11:28:26 crc kubenswrapper[4724]: I0226 11:28:26.781687 4724 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x9682"] Feb 26 11:28:27 crc kubenswrapper[4724]: E0226 11:28:27.183606 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="b70be877-253f-4859-ae54-bd241f38cb93" Feb 26 11:28:27 crc kubenswrapper[4724]: E0226 11:28:27.184959 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" Feb 26 11:28:33 crc kubenswrapper[4724]: I0226 11:28:33.018228 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 11:28:33 crc kubenswrapper[4724]: I0226 11:28:33.233704 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x9682" event={"ID":"5b8939ea-2d97-461c-ad75-cba4379157f7","Type":"ContainerStarted","Data":"701ac93422b4b31a3a4ca136f99530e7de4145ab3fc4c6dd069565f0cb53c2e3"} Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.337564 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.337990 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6446d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-z6pc7_openstack(5aaccc92-86d5-4ad9-a198-2a41fd2c0675): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.339374 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" podUID="5aaccc92-86d5-4ad9-a198-2a41fd2c0675" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.390974 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.391145 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s46wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-c79hk_openstack(fe5f7817-1dcd-4d76-9817-2ceebf76317f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.397046 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" podUID="fe5f7817-1dcd-4d76-9817-2ceebf76317f" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.411015 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.411160 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2f2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-km282_openstack(26b204be-3e88-4df5-aeb0-202f78e065a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.412619 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" podUID="26b204be-3e88-4df5-aeb0-202f78e065a6" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.581928 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.582431 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9wt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5txwr_openstack(ccbe85f6-ff6c-49c2-9304-72ae30711c4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:28:33 crc kubenswrapper[4724]: E0226 11:28:33.583936 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" podUID="ccbe85f6-ff6c-49c2-9304-72ae30711c4b" Feb 26 11:28:33 crc kubenswrapper[4724]: I0226 11:28:33.958388 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 11:28:34 crc kubenswrapper[4724]: I0226 11:28:34.107655 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wsr8k"] Feb 26 11:28:34 crc kubenswrapper[4724]: I0226 11:28:34.251765 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca","Type":"ContainerStarted","Data":"320b47cf24605e727833d653f451101304a98daf4cdedd168f379918c7d4490c"} Feb 26 11:28:34 crc kubenswrapper[4724]: I0226 11:28:34.255712 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"55d1fb33975b75b061c0528685eae11004b1a2f0eedaec829e3798af02cfba8d"} Feb 26 11:28:34 crc kubenswrapper[4724]: I0226 11:28:34.261225 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b0d66ab1-513b-452a-9f31-bfc4b4be6c18","Type":"ContainerStarted","Data":"4c45c753f70c2eedc6d5c845513cbfbadbe89580fea63a6a7303133041b962fa"} Feb 26 11:28:34 crc kubenswrapper[4724]: 
E0226 11:28:34.264031 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" podUID="ccbe85f6-ff6c-49c2-9304-72ae30711c4b" Feb 26 11:28:34 crc kubenswrapper[4724]: E0226 11:28:34.264350 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" podUID="5aaccc92-86d5-4ad9-a198-2a41fd2c0675" Feb 26 11:28:34 crc kubenswrapper[4724]: I0226 11:28:34.983740 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:28:34 crc kubenswrapper[4724]: I0226 11:28:34.991442 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.058806 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-config\") pod \"26b204be-3e88-4df5-aeb0-202f78e065a6\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.059204 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5f7817-1dcd-4d76-9817-2ceebf76317f-config\") pod \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.059352 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-dns-svc\") pod \"26b204be-3e88-4df5-aeb0-202f78e065a6\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.059408 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2f2r\" (UniqueName: \"kubernetes.io/projected/26b204be-3e88-4df5-aeb0-202f78e065a6-kube-api-access-m2f2r\") pod \"26b204be-3e88-4df5-aeb0-202f78e065a6\" (UID: \"26b204be-3e88-4df5-aeb0-202f78e065a6\") " Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.059483 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46wn\" (UniqueName: \"kubernetes.io/projected/fe5f7817-1dcd-4d76-9817-2ceebf76317f-kube-api-access-s46wn\") pod \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\" (UID: \"fe5f7817-1dcd-4d76-9817-2ceebf76317f\") " Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.059812 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-config" (OuterVolumeSpecName: "config") pod "26b204be-3e88-4df5-aeb0-202f78e065a6" (UID: "26b204be-3e88-4df5-aeb0-202f78e065a6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.060531 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26b204be-3e88-4df5-aeb0-202f78e065a6" (UID: "26b204be-3e88-4df5-aeb0-202f78e065a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.060632 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5f7817-1dcd-4d76-9817-2ceebf76317f-config" (OuterVolumeSpecName: "config") pod "fe5f7817-1dcd-4d76-9817-2ceebf76317f" (UID: "fe5f7817-1dcd-4d76-9817-2ceebf76317f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.069488 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5f7817-1dcd-4d76-9817-2ceebf76317f-kube-api-access-s46wn" (OuterVolumeSpecName: "kube-api-access-s46wn") pod "fe5f7817-1dcd-4d76-9817-2ceebf76317f" (UID: "fe5f7817-1dcd-4d76-9817-2ceebf76317f"). InnerVolumeSpecName "kube-api-access-s46wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.069594 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b204be-3e88-4df5-aeb0-202f78e065a6-kube-api-access-m2f2r" (OuterVolumeSpecName: "kube-api-access-m2f2r") pod "26b204be-3e88-4df5-aeb0-202f78e065a6" (UID: "26b204be-3e88-4df5-aeb0-202f78e065a6"). InnerVolumeSpecName "kube-api-access-m2f2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.161521 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46wn\" (UniqueName: \"kubernetes.io/projected/fe5f7817-1dcd-4d76-9817-2ceebf76317f-kube-api-access-s46wn\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.161555 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.161569 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5f7817-1dcd-4d76-9817-2ceebf76317f-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.161580 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b204be-3e88-4df5-aeb0-202f78e065a6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.161591 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2f2r\" (UniqueName: \"kubernetes.io/projected/26b204be-3e88-4df5-aeb0-202f78e065a6-kube-api-access-m2f2r\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.276134 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" event={"ID":"26b204be-3e88-4df5-aeb0-202f78e065a6","Type":"ContainerDied","Data":"403f2f736deb82b44ca37f268a5a5148ce672dcd6682fb1010024e70befff92a"} Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.276333 4724 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-km282" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.281719 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abc9b19-0018-46d1-a119-0ffb069a1795","Type":"ContainerStarted","Data":"96edbc3b48739f9d0722271dbd52ec3aa789ff446e86459d9e835af50e6462dc"} Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.284877 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wsr8k" event={"ID":"5948e8de-f31a-4efb-80dc-e8dfb083ab79","Type":"ContainerStarted","Data":"5d6fc2ef050b8490ead72b662fe1230e78a40286c316d5de7cecf975b76ad5bb"} Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.287055 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"028cb20f-b715-40db-94c1-38bfb934ef53","Type":"ContainerStarted","Data":"ab66f6d582a823eea8d95f7b10c1d9102018d97c3f5f5789398616719bec2bc2"} Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.289688 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad24283d-3357-4230-a2b2-3d5ed0fefa7f","Type":"ContainerStarted","Data":"2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21"} Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.291117 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" event={"ID":"fe5f7817-1dcd-4d76-9817-2ceebf76317f","Type":"ContainerDied","Data":"ab93308fd0ac54f9c5edfeb632507afbf30fd380641bb4382b9d59f1aade1da4"} Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.291346 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c79hk" Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.435919 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-km282"] Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.455486 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-km282"] Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.484923 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c79hk"] Feb 26 11:28:35 crc kubenswrapper[4724]: I0226 11:28:35.495620 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c79hk"] Feb 26 11:28:36 crc kubenswrapper[4724]: I0226 11:28:36.010828 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b204be-3e88-4df5-aeb0-202f78e065a6" path="/var/lib/kubelet/pods/26b204be-3e88-4df5-aeb0-202f78e065a6/volumes" Feb 26 11:28:36 crc kubenswrapper[4724]: I0226 11:28:36.011570 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5f7817-1dcd-4d76-9817-2ceebf76317f" path="/var/lib/kubelet/pods/fe5f7817-1dcd-4d76-9817-2ceebf76317f/volumes" Feb 26 11:28:38 crc kubenswrapper[4724]: I0226 11:28:38.315884 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b0d66ab1-513b-452a-9f31-bfc4b4be6c18","Type":"ContainerDied","Data":"4c45c753f70c2eedc6d5c845513cbfbadbe89580fea63a6a7303133041b962fa"} Feb 26 11:28:38 crc kubenswrapper[4724]: I0226 11:28:38.315908 4724 generic.go:334] "Generic (PLEG): container finished" podID="b0d66ab1-513b-452a-9f31-bfc4b4be6c18" containerID="4c45c753f70c2eedc6d5c845513cbfbadbe89580fea63a6a7303133041b962fa" exitCode=0 Feb 26 11:28:39 crc 
kubenswrapper[4724]: I0226 11:28:39.333699 4724 generic.go:334] "Generic (PLEG): container finished" podID="6abc9b19-0018-46d1-a119-0ffb069a1795" containerID="96edbc3b48739f9d0722271dbd52ec3aa789ff446e86459d9e835af50e6462dc" exitCode=0 Feb 26 11:28:39 crc kubenswrapper[4724]: I0226 11:28:39.333777 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abc9b19-0018-46d1-a119-0ffb069a1795","Type":"ContainerDied","Data":"96edbc3b48739f9d0722271dbd52ec3aa789ff446e86459d9e835af50e6462dc"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.345214 4724 generic.go:334] "Generic (PLEG): container finished" podID="5948e8de-f31a-4efb-80dc-e8dfb083ab79" containerID="588d58a31310e05344989f0c25564674108af14f7fd39c0e7bff4bf830d9005a" exitCode=0 Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.346490 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wsr8k" event={"ID":"5948e8de-f31a-4efb-80dc-e8dfb083ab79","Type":"ContainerDied","Data":"588d58a31310e05344989f0c25564674108af14f7fd39c0e7bff4bf830d9005a"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.359062 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x9682" event={"ID":"5b8939ea-2d97-461c-ad75-cba4379157f7","Type":"ContainerStarted","Data":"7b71c0a47c1c3f1881cc147e5d41017efc6cdb8ad235523e9a2ffdd4cbf6ab44"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.360035 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-x9682" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.362046 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"028cb20f-b715-40db-94c1-38bfb934ef53","Type":"ContainerStarted","Data":"0b214c6f5cbed977cbc9c9d6a60c962f914e73a80eb00b69a4cfd2148f3eb73d"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.364518 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b0d66ab1-513b-452a-9f31-bfc4b4be6c18","Type":"ContainerStarted","Data":"0c0f9e95d6e4ec66700aea824e873ddc64b042a8d7a56d426c49add330043cb9"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.376978 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca","Type":"ContainerStarted","Data":"09788a6f7bb013081f4124190b83486025a9733f5d403e7ccd5110861ac1c3a4"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.381256 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b70be877-253f-4859-ae54-bd241f38cb93","Type":"ContainerStarted","Data":"ea0733af623d6fb0daad49fec1d98b622f12a63303340cce33e5d6b6f9643c42"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.382043 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.388554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abc9b19-0018-46d1-a119-0ffb069a1795","Type":"ContainerStarted","Data":"01184f9e3783573a4675f7ca61d590f2a4e298a924b415ffdcf40826a27e3406"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.389892 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"a9a1a92d-3769-4901-89b0-2fa52cbb547a","Type":"ContainerStarted","Data":"92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e"} Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.390036 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.401699 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x9682" podStartSLOduration=22.650249413 podStartE2EDuration="29.401675361s" podCreationTimestamp="2026-02-26 11:28:11 +0000 UTC" firstStartedPulling="2026-02-26 11:28:32.380366496 +0000 UTC m=+1379.036105601" lastFinishedPulling="2026-02-26 11:28:39.131792434 +0000 UTC m=+1385.787531549" observedRunningTime="2026-02-26 11:28:40.394398145 +0000 UTC m=+1387.050137290" watchObservedRunningTime="2026-02-26 11:28:40.401675361 +0000 UTC m=+1387.057414486" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.418523 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.754191547 podStartE2EDuration="36.418503801s" podCreationTimestamp="2026-02-26 11:28:04 +0000 UTC" firstStartedPulling="2026-02-26 11:28:08.71401776 +0000 UTC m=+1355.369756875" lastFinishedPulling="2026-02-26 11:28:32.378330014 +0000 UTC m=+1379.034069129" observedRunningTime="2026-02-26 11:28:40.415006871 +0000 UTC m=+1387.070745996" watchObservedRunningTime="2026-02-26 11:28:40.418503801 +0000 UTC m=+1387.074242906" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.437721 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.935560346 podStartE2EDuration="37.437704131s" podCreationTimestamp="2026-02-26 11:28:03 +0000 UTC" firstStartedPulling="2026-02-26 11:28:05.993642702 +0000 UTC m=+1352.649381817" lastFinishedPulling="2026-02-26 11:28:33.495786497 +0000 UTC m=+1380.151525602" observedRunningTime="2026-02-26 11:28:40.435200047 +0000 UTC m=+1387.090939382" watchObservedRunningTime="2026-02-26 11:28:40.437704131 +0000 UTC m=+1387.093443246" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.459554 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.343996711 podStartE2EDuration="32.459538779s" podCreationTimestamp="2026-02-26 11:28:08 +0000 UTC" firstStartedPulling="2026-02-26 11:28:10.012207742 +0000 UTC m=+1356.667946857" lastFinishedPulling="2026-02-26 11:28:39.12774981 +0000 UTC m=+1385.783488925" observedRunningTime="2026-02-26 11:28:40.455696481 +0000 UTC m=+1387.111435606" watchObservedRunningTime="2026-02-26 11:28:40.459538779 +0000 UTC m=+1387.115277894" Feb 26 11:28:40 crc kubenswrapper[4724]: I0226 11:28:40.476007 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.381624006 podStartE2EDuration="34.475983629s" podCreationTimestamp="2026-02-26 11:28:06 +0000 UTC" firstStartedPulling="2026-02-26 11:28:09.056496168 +0000 UTC m=+1355.712235283" lastFinishedPulling="2026-02-26 11:28:39.150855781 +0000 UTC m=+1385.806594906" observedRunningTime="2026-02-26 11:28:40.470418537 +0000 UTC m=+1387.126157662" watchObservedRunningTime="2026-02-26 11:28:40.475983629 +0000 UTC m=+1387.131722744" Feb 26 11:28:41 crc kubenswrapper[4724]: I0226 11:28:41.400216 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-wsr8k" event={"ID":"5948e8de-f31a-4efb-80dc-e8dfb083ab79","Type":"ContainerStarted","Data":"f2ea31b6c63fa8be364df4f808d5d7dd2a1d6897a13980b459b4e8249c62e8d5"} Feb 26 11:28:41 crc kubenswrapper[4724]: I0226 11:28:41.406216 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"028cb20f-b715-40db-94c1-38bfb934ef53","Type":"ContainerStarted","Data":"ba33e9bb4d361ad46b66ed8b7c3db3a6e06d9ddce0fd4e581ebae4de23a25166"} Feb 26 11:28:41 crc kubenswrapper[4724]: I0226 11:28:41.409668 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6f3d9665-0fdf-4b18-a4cb-1e84f24327ca","Type":"ContainerStarted","Data":"c2d57f767d2e893ac98e7913bd9c530108feb2214aaadb84769194d60de95bc7"} Feb 26 11:28:41 crc kubenswrapper[4724]: I0226 11:28:41.427374 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.655417797 podStartE2EDuration="31.427354027s" podCreationTimestamp="2026-02-26 11:28:10 +0000 UTC" firstStartedPulling="2026-02-26 11:28:34.302849789 +0000 UTC m=+1380.958588904" lastFinishedPulling="2026-02-26 11:28:41.074786019 +0000 UTC m=+1387.730525134" observedRunningTime="2026-02-26 11:28:41.421337953 +0000 UTC m=+1388.077077068" watchObservedRunningTime="2026-02-26 11:28:41.427354027 +0000 UTC m=+1388.083093162" Feb 26 11:28:41 crc kubenswrapper[4724]: I0226 11:28:41.454714 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.677493631 podStartE2EDuration="27.454693956s" podCreationTimestamp="2026-02-26 11:28:14 +0000 UTC" firstStartedPulling="2026-02-26 11:28:33.280896267 +0000 UTC m=+1379.936635382" lastFinishedPulling="2026-02-26 11:28:41.058096592 +0000 UTC m=+1387.713835707" observedRunningTime="2026-02-26 11:28:41.451399892 +0000 UTC m=+1388.107139017" watchObservedRunningTime="2026-02-26 11:28:41.454693956 +0000 UTC m=+1388.110433071" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.115898 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.116091 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.153483 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.420112 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wsr8k" event={"ID":"5948e8de-f31a-4efb-80dc-e8dfb083ab79","Type":"ContainerStarted","Data":"bab3670b47990a7d4aa9b7a519b5b6e890f0454e691e722f513bd7928a2d815c"} Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.420914 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.420936 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.440769 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wsr8k" podStartSLOduration=26.674223211 podStartE2EDuration="31.440750621s" podCreationTimestamp="2026-02-26 11:28:11 +0000 UTC" firstStartedPulling="2026-02-26 11:28:34.303055794 +0000 UTC 
m=+1380.958794909" lastFinishedPulling="2026-02-26 11:28:39.069583204 +0000 UTC m=+1385.725322319" observedRunningTime="2026-02-26 11:28:42.436235015 +0000 UTC m=+1389.091974120" watchObservedRunningTime="2026-02-26 11:28:42.440750621 +0000 UTC m=+1389.096489736" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.596517 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:42 crc kubenswrapper[4724]: I0226 11:28:42.635339 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:43 crc kubenswrapper[4724]: I0226 11:28:43.425132 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.433106 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d7fdccb-4fd0-4a6e-9241-add667b9a537","Type":"ContainerStarted","Data":"f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2"} Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.475267 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.761571 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5txwr"] Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.807224 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xjrd6"] Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.808494 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.812763 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.816907 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xjrd6"] Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.855347 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wm86x"] Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.856351 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.864834 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.897145 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wm86x"] Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943427 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9784324f-b3cf-403e-9e3f-c5298a5257eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943481 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943500 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943596 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9knx\" (UniqueName: \"kubernetes.io/projected/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-kube-api-access-b9knx\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943681 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9784324f-b3cf-403e-9e3f-c5298a5257eb-config\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943826 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9784324f-b3cf-403e-9e3f-c5298a5257eb-ovn-rundir\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943881 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm895\" (UniqueName: \"kubernetes.io/projected/9784324f-b3cf-403e-9e3f-c5298a5257eb-kube-api-access-lm895\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943924 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9784324f-b3cf-403e-9e3f-c5298a5257eb-combined-ca-bundle\") 
pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943951 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9784324f-b3cf-403e-9e3f-c5298a5257eb-ovs-rundir\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:44 crc kubenswrapper[4724]: I0226 11:28:44.943981 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-config\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045681 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9784324f-b3cf-403e-9e3f-c5298a5257eb-combined-ca-bundle\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045749 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9784324f-b3cf-403e-9e3f-c5298a5257eb-ovs-rundir\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045791 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-config\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045845 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9784324f-b3cf-403e-9e3f-c5298a5257eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045880 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045901 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045928 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9knx\" (UniqueName: \"kubernetes.io/projected/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-kube-api-access-b9knx\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: 
\"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.045961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9784324f-b3cf-403e-9e3f-c5298a5257eb-config\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.046058 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9784324f-b3cf-403e-9e3f-c5298a5257eb-ovn-rundir\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.046104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm895\" (UniqueName: \"kubernetes.io/projected/9784324f-b3cf-403e-9e3f-c5298a5257eb-kube-api-access-lm895\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.046452 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9784324f-b3cf-403e-9e3f-c5298a5257eb-ovs-rundir\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.047032 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-config\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.047135 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9784324f-b3cf-403e-9e3f-c5298a5257eb-ovn-rundir\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.047137 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9784324f-b3cf-403e-9e3f-c5298a5257eb-config\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.047653 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.047761 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.052704 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9784324f-b3cf-403e-9e3f-c5298a5257eb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.061679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9784324f-b3cf-403e-9e3f-c5298a5257eb-combined-ca-bundle\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.062120 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm895\" (UniqueName: \"kubernetes.io/projected/9784324f-b3cf-403e-9e3f-c5298a5257eb-kube-api-access-lm895\") pod \"ovn-controller-metrics-wm86x\" (UID: \"9784324f-b3cf-403e-9e3f-c5298a5257eb\") " pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.075728 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9knx\" (UniqueName: \"kubernetes.io/projected/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-kube-api-access-b9knx\") pod \"dnsmasq-dns-6bc7876d45-xjrd6\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.121384 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.121430 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.136445 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.184627 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wm86x" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.216532 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.339892 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z6pc7"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.353866 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9wt8\" (UniqueName: \"kubernetes.io/projected/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-kube-api-access-z9wt8\") pod \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.354103 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-config\") pod \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.354150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-dns-svc\") pod \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\" (UID: \"ccbe85f6-ff6c-49c2-9304-72ae30711c4b\") " Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.354948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccbe85f6-ff6c-49c2-9304-72ae30711c4b" (UID: "ccbe85f6-ff6c-49c2-9304-72ae30711c4b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.355369 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-config" (OuterVolumeSpecName: "config") pod "ccbe85f6-ff6c-49c2-9304-72ae30711c4b" (UID: "ccbe85f6-ff6c-49c2-9304-72ae30711c4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.368595 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-kube-api-access-z9wt8" (OuterVolumeSpecName: "kube-api-access-z9wt8") pod "ccbe85f6-ff6c-49c2-9304-72ae30711c4b" (UID: "ccbe85f6-ff6c-49c2-9304-72ae30711c4b"). InnerVolumeSpecName "kube-api-access-z9wt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.457753 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9wt8\" (UniqueName: \"kubernetes.io/projected/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-kube-api-access-z9wt8\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.457802 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.457813 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccbe85f6-ff6c-49c2-9304-72ae30711c4b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.476362 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.478266 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5txwr" event={"ID":"ccbe85f6-ff6c-49c2-9304-72ae30711c4b","Type":"ContainerDied","Data":"98b4e71fa59eb1d1e6834e78ecceecd4e8313fd4d7c535704a9a170f3a234224"} Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.479283 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-xcnz9"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.480533 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.492061 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.563796 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-dns-svc\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.563901 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvtqf\" (UniqueName: \"kubernetes.io/projected/8d99e287-d985-4f45-9117-0ccf544d858e-kube-api-access-fvtqf\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.563935 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.563973 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-config\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.563994 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.660367 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xcnz9"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.671387 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-config\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.671433 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.671491 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-dns-svc\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.671565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvtqf\" (UniqueName: \"kubernetes.io/projected/8d99e287-d985-4f45-9117-0ccf544d858e-kube-api-access-fvtqf\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.671599 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.674038 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.674212 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.674591 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-config\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.674805 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-dns-svc\") pod \"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.687432 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5txwr"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.689604 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5txwr"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.707443 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvtqf\" (UniqueName: \"kubernetes.io/projected/8d99e287-d985-4f45-9117-0ccf544d858e-kube-api-access-fvtqf\") pod 
\"dnsmasq-dns-8554648995-xcnz9\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.845375 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xjrd6"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.864881 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wm86x"] Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.904894 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:45 crc kubenswrapper[4724]: I0226 11:28:45.985816 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccbe85f6-ff6c-49c2-9304-72ae30711c4b" path="/var/lib/kubelet/pods/ccbe85f6-ff6c-49c2-9304-72ae30711c4b/volumes" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.017708 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.087541 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6446d\" (UniqueName: \"kubernetes.io/projected/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-kube-api-access-6446d\") pod \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.087826 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-config\") pod \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.087887 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-dns-svc\") pod \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\" (UID: \"5aaccc92-86d5-4ad9-a198-2a41fd2c0675\") " Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.089332 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-config" (OuterVolumeSpecName: "config") pod "5aaccc92-86d5-4ad9-a198-2a41fd2c0675" (UID: "5aaccc92-86d5-4ad9-a198-2a41fd2c0675"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.091425 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5aaccc92-86d5-4ad9-a198-2a41fd2c0675" (UID: "5aaccc92-86d5-4ad9-a198-2a41fd2c0675"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.097720 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-kube-api-access-6446d" (OuterVolumeSpecName: "kube-api-access-6446d") pod "5aaccc92-86d5-4ad9-a198-2a41fd2c0675" (UID: "5aaccc92-86d5-4ad9-a198-2a41fd2c0675"). InnerVolumeSpecName "kube-api-access-6446d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.189683 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6446d\" (UniqueName: \"kubernetes.io/projected/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-kube-api-access-6446d\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.190047 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.190057 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aaccc92-86d5-4ad9-a198-2a41fd2c0675-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.199672 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.199711 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.305373 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.482265 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" event={"ID":"5aaccc92-86d5-4ad9-a198-2a41fd2c0675","Type":"ContainerDied","Data":"a58f4e7387eb1534170ec91569cdef36c96531f83bde02302c0245fc7c720d58"} Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.482296 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z6pc7" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.486270 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wm86x" event={"ID":"9784324f-b3cf-403e-9e3f-c5298a5257eb","Type":"ContainerStarted","Data":"20bbfc6cc45ca487177fbecdfa377b8ecca5c8ddf4bd00b5d46c63ad42c15b2b"} Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.486306 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wm86x" event={"ID":"9784324f-b3cf-403e-9e3f-c5298a5257eb","Type":"ContainerStarted","Data":"30a35409885b203a473fb5830eda9136dfc81eda3c18e5eb55844656d71381c5"} Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.487801 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" event={"ID":"fb4b2f4d-eab5-465e-b78f-3f1eae492b05","Type":"ContainerStarted","Data":"1d1a4bfcb1dc761309549aa998c0c25509c50b492e3f8b847f82bb94da72027a"} Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.521432 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wm86x" podStartSLOduration=2.521412766 podStartE2EDuration="2.521412766s" podCreationTimestamp="2026-02-26 11:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:46.508142207 +0000 UTC m=+1393.163881322" watchObservedRunningTime="2026-02-26 11:28:46.521412766 +0000 UTC m=+1393.177151881" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.533028 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xcnz9"] Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.590510 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.636847 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.685283 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z6pc7"] Feb 26 11:28:46 crc kubenswrapper[4724]: I0226 11:28:46.693835 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z6pc7"] Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.158474 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.380000 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.381485 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.383494 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.383711 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.383978 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nj8q9" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.388386 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.402538 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.493971 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerID="b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1" exitCode=0 Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.494042 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" event={"ID":"fb4b2f4d-eab5-465e-b78f-3f1eae492b05","Type":"ContainerDied","Data":"b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1"} Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.497129 4724 generic.go:334] "Generic (PLEG): container finished" podID="8d99e287-d985-4f45-9117-0ccf544d858e" containerID="d894167865763a371779af2fff76f4ff16a7b25a6fc1371f15ad4750ab9f6ccf" exitCode=0 Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.497209 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xcnz9" event={"ID":"8d99e287-d985-4f45-9117-0ccf544d858e","Type":"ContainerDied","Data":"d894167865763a371779af2fff76f4ff16a7b25a6fc1371f15ad4750ab9f6ccf"} Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.497248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xcnz9" event={"ID":"8d99e287-d985-4f45-9117-0ccf544d858e","Type":"ContainerStarted","Data":"34f7dae08cd3999e64f813cbe96280aba87938b5b68508a329b72726eff7e97f"} Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.512402 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619c3911-f86d-468d-b689-e939b16388e2-scripts\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.512822 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.513003 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.513303 4724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrgn\" (UniqueName: \"kubernetes.io/projected/619c3911-f86d-468d-b689-e939b16388e2-kube-api-access-hlrgn\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.513420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/619c3911-f86d-468d-b689-e939b16388e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.513531 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619c3911-f86d-468d-b689-e939b16388e2-config\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.513635 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.615623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619c3911-f86d-468d-b689-e939b16388e2-config\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.615909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.615988 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619c3911-f86d-468d-b689-e939b16388e2-scripts\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.616046 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.616106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.616159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrgn\" (UniqueName: \"kubernetes.io/projected/619c3911-f86d-468d-b689-e939b16388e2-kube-api-access-hlrgn\") pod \"ovn-northd-0\" (UID: 
\"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.616340 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/619c3911-f86d-468d-b689-e939b16388e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.616915 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619c3911-f86d-468d-b689-e939b16388e2-scripts\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.617905 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619c3911-f86d-468d-b689-e939b16388e2-config\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.623590 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/619c3911-f86d-468d-b689-e939b16388e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.627029 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.629149 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.629494 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/619c3911-f86d-468d-b689-e939b16388e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.640390 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlrgn\" (UniqueName: \"kubernetes.io/projected/619c3911-f86d-468d-b689-e939b16388e2-kube-api-access-hlrgn\") pod \"ovn-northd-0\" (UID: \"619c3911-f86d-468d-b689-e939b16388e2\") " pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.701709 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.986112 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aaccc92-86d5-4ad9-a198-2a41fd2c0675" path="/var/lib/kubelet/pods/5aaccc92-86d5-4ad9-a198-2a41fd2c0675/volumes" Feb 26 11:28:47 crc kubenswrapper[4724]: I0226 11:28:47.986650 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.237978 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.318823 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.506608 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" event={"ID":"fb4b2f4d-eab5-465e-b78f-3f1eae492b05","Type":"ContainerStarted","Data":"b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3"} Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.506743 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.509245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xcnz9" event={"ID":"8d99e287-d985-4f45-9117-0ccf544d858e","Type":"ContainerStarted","Data":"b2aa358ef59a07eee474199fd651e5135324fbf28420965f3554f7676f82062d"} Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.509306 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.510808 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"619c3911-f86d-468d-b689-e939b16388e2","Type":"ContainerStarted","Data":"387e6b84462307fdfcd899727519e16086c6c7400d89fc0579933e8dbd8be1e9"} Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.533690 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" podStartSLOduration=3.94739943 podStartE2EDuration="4.53366533s" podCreationTimestamp="2026-02-26 11:28:44 +0000 UTC" firstStartedPulling="2026-02-26 11:28:45.898371366 +0000 UTC m=+1392.554110481" lastFinishedPulling="2026-02-26 11:28:46.484637266 +0000 UTC m=+1393.140376381" observedRunningTime="2026-02-26 11:28:48.529583356 +0000 UTC m=+1395.185322471" watchObservedRunningTime="2026-02-26 11:28:48.53366533 +0000 UTC m=+1395.189404455" Feb 26 11:28:48 crc kubenswrapper[4724]: I0226 11:28:48.550520 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-xcnz9" podStartSLOduration=3.55050136 podStartE2EDuration="3.55050136s" podCreationTimestamp="2026-02-26 11:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:48.546763245 +0000 UTC m=+1395.202502370" watchObservedRunningTime="2026-02-26 11:28:48.55050136 +0000 UTC m=+1395.206240485" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.081428 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.272375 4724 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xjrd6"] Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.325407 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jkf2d"] Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.326642 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.330703 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jkf2d"] Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.447957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.448351 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlttz\" (UniqueName: \"kubernetes.io/projected/39d817a7-9237-4683-88aa-20bbbd487d49-kube-api-access-rlttz\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.448433 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.448537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.448644 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-config\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.530659 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"619c3911-f86d-468d-b689-e939b16388e2","Type":"ContainerStarted","Data":"455e561b8acfefae1add304bf95ac3576c0aaa2f9fa06122d77280614d86a69b"} Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.550264 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.551248 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-config\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: 
\"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.551155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.551762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-config\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.551909 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.551938 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlttz\" (UniqueName: \"kubernetes.io/projected/39d817a7-9237-4683-88aa-20bbbd487d49-kube-api-access-rlttz\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.551956 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.552856 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.553296 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.591172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlttz\" (UniqueName: \"kubernetes.io/projected/39d817a7-9237-4683-88aa-20bbbd487d49-kube-api-access-rlttz\") pod \"dnsmasq-dns-b8fbc5445-jkf2d\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:49 crc kubenswrapper[4724]: I0226 11:28:49.774228 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.298897 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jkf2d"] Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.489998 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.498083 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.500124 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.500456 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.500755 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dk7fw" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.501015 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.518315 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.537876 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" event={"ID":"39d817a7-9237-4683-88aa-20bbbd487d49","Type":"ContainerStarted","Data":"67a5cd1c59279449319086ea1c4586d927b84ac1988de79de6c1c0b5e7e62156"} Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.542928 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"619c3911-f86d-468d-b689-e939b16388e2","Type":"ContainerStarted","Data":"fa0c2bea97335a45eaed558dff3ec617c3b4026bac60688b6d66e871f40e359a"} Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.543129 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerName="dnsmasq-dns" containerID="cri-o://b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3" gracePeriod=10 Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.567977 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d5750fa4-34c3-4c23-b0cc-af9726d3034c-lock\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.568250 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.568556 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvhcq\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-kube-api-access-hvhcq\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.568682 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5750fa4-34c3-4c23-b0cc-af9726d3034c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.568761 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5750fa4-34c3-4c23-b0cc-af9726d3034c-cache\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.568830 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.604941 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.6696446959999998 podStartE2EDuration="3.604920003s" podCreationTimestamp="2026-02-26 11:28:47 +0000 UTC" firstStartedPulling="2026-02-26 11:28:47.9837838 +0000 UTC m=+1394.639522915" lastFinishedPulling="2026-02-26 11:28:48.919059107 +0000 UTC m=+1395.574798222" observedRunningTime="2026-02-26 11:28:50.588528764 +0000 UTC m=+1397.244267889" watchObservedRunningTime="2026-02-26 11:28:50.604920003 +0000 UTC m=+1397.260659128" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.670477 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvhcq\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-kube-api-access-hvhcq\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.670674 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5750fa4-34c3-4c23-b0cc-af9726d3034c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.670705 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5750fa4-34c3-4c23-b0cc-af9726d3034c-cache\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.670751 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.670776 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d5750fa4-34c3-4c23-b0cc-af9726d3034c-lock\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.670907 4724 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: E0226 11:28:50.671634 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 11:28:50 crc kubenswrapper[4724]: E0226 11:28:50.672019 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 11:28:50 crc kubenswrapper[4724]: E0226 11:28:50.672069 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift podName:d5750fa4-34c3-4c23-b0cc-af9726d3034c nodeName:}" failed. No retries permitted until 2026-02-26 11:28:51.172051659 +0000 UTC m=+1397.827790774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift") pod "swift-storage-0" (UID: "d5750fa4-34c3-4c23-b0cc-af9726d3034c") : configmap "swift-ring-files" not found Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.672324 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d5750fa4-34c3-4c23-b0cc-af9726d3034c-lock\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.672408 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.678398 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5750fa4-34c3-4c23-b0cc-af9726d3034c-cache\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.678878 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5750fa4-34c3-4c23-b0cc-af9726d3034c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.688759 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvhcq\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-kube-api-access-hvhcq\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.694877 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.885693 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.975051 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9knx\" (UniqueName: \"kubernetes.io/projected/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-kube-api-access-b9knx\") pod \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.975092 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-ovsdbserver-sb\") pod \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.975214 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-dns-svc\") pod \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.975304 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-config\") pod \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\" (UID: \"fb4b2f4d-eab5-465e-b78f-3f1eae492b05\") " Feb 26 11:28:50 crc kubenswrapper[4724]: I0226 11:28:50.985771 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-kube-api-access-b9knx" (OuterVolumeSpecName: "kube-api-access-b9knx") pod "fb4b2f4d-eab5-465e-b78f-3f1eae492b05" (UID: "fb4b2f4d-eab5-465e-b78f-3f1eae492b05"). InnerVolumeSpecName "kube-api-access-b9knx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.045831 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb4b2f4d-eab5-465e-b78f-3f1eae492b05" (UID: "fb4b2f4d-eab5-465e-b78f-3f1eae492b05"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.046797 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-config" (OuterVolumeSpecName: "config") pod "fb4b2f4d-eab5-465e-b78f-3f1eae492b05" (UID: "fb4b2f4d-eab5-465e-b78f-3f1eae492b05"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.052967 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7kkhs"] Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.053671 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerName="init" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.053771 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerName="init" Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.053868 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerName="dnsmasq-dns" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.053945 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerName="dnsmasq-dns" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.054247 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerName="dnsmasq-dns" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.054292 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb4b2f4d-eab5-465e-b78f-3f1eae492b05" (UID: "fb4b2f4d-eab5-465e-b78f-3f1eae492b05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.055209 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.060335 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.060341 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.061085 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.071892 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7kkhs"] Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.078488 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9knx\" (UniqueName: \"kubernetes.io/projected/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-kube-api-access-b9knx\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.078514 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.078524 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.078533 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b2f4d-eab5-465e-b78f-3f1eae492b05-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:51 crc 
kubenswrapper[4724]: I0226 11:28:51.180066 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-ring-data-devices\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180129 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-combined-ca-bundle\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180146 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-swiftconf\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180168 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-scripts\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180206 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7412680-68df-4ebb-9961-8a89d8f83176-etc-swift\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180224 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7z82\" (UniqueName: \"kubernetes.io/projected/e7412680-68df-4ebb-9961-8a89d8f83176-kube-api-access-c7z82\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180266 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.180309 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-dispersionconf\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.180520 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.180534 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 11:28:51 crc 
kubenswrapper[4724]: E0226 11:28:51.180576 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift podName:d5750fa4-34c3-4c23-b0cc-af9726d3034c nodeName:}" failed. No retries permitted until 2026-02-26 11:28:52.180558282 +0000 UTC m=+1398.836297397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift") pod "swift-storage-0" (UID: "d5750fa4-34c3-4c23-b0cc-af9726d3034c") : configmap "swift-ring-files" not found Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.281809 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-combined-ca-bundle\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.281848 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-swiftconf\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.281872 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-scripts\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.281889 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7412680-68df-4ebb-9961-8a89d8f83176-etc-swift\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.281905 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7z82\" (UniqueName: \"kubernetes.io/projected/e7412680-68df-4ebb-9961-8a89d8f83176-kube-api-access-c7z82\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.281980 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-dispersionconf\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.282050 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-ring-data-devices\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.282679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-ring-data-devices\") pod 
\"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.282926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7412680-68df-4ebb-9961-8a89d8f83176-etc-swift\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.284074 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-scripts\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.288228 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-combined-ca-bundle\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.288413 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-dispersionconf\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.288570 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-swiftconf\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.302316 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7z82\" (UniqueName: \"kubernetes.io/projected/e7412680-68df-4ebb-9961-8a89d8f83176-kube-api-access-c7z82\") pod \"swift-ring-rebalance-7kkhs\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.372825 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.563607 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" containerID="b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3" exitCode=0 Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.563669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" event={"ID":"fb4b2f4d-eab5-465e-b78f-3f1eae492b05","Type":"ContainerDied","Data":"b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3"} Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.563699 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.564028 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xjrd6" event={"ID":"fb4b2f4d-eab5-465e-b78f-3f1eae492b05","Type":"ContainerDied","Data":"1d1a4bfcb1dc761309549aa998c0c25509c50b492e3f8b847f82bb94da72027a"} Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.564066 4724 scope.go:117] "RemoveContainer" containerID="b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.568771 4724 generic.go:334] "Generic (PLEG): container finished" podID="39d817a7-9237-4683-88aa-20bbbd487d49" containerID="7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5" exitCode=0 Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.570004 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" event={"ID":"39d817a7-9237-4683-88aa-20bbbd487d49","Type":"ContainerDied","Data":"7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5"} Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.570218 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.690334 4724 scope.go:117] "RemoveContainer" containerID="b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.714963 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xjrd6"] Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.725286 4724 scope.go:117] "RemoveContainer" containerID="b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3" Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.726323 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3\": container with ID starting with b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3 not found: ID does not exist" containerID="b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.726358 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3"} err="failed to get container status \"b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3\": rpc error: code = NotFound desc = could not find container \"b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3\": container with ID starting with b211f8f0b97488f29d5d8e79dc6dff5e5d010b3bd6030cdd27b5f392cd6fdae3 not found: ID does not exist" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.726377 4724 scope.go:117] "RemoveContainer" containerID="b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1" Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.726722 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1\": container with ID starting with b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1 not found: ID does not exist" containerID="b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1" Feb 26 11:28:51 crc kubenswrapper[4724]: 
I0226 11:28:51.726748 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1"} err="failed to get container status \"b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1\": rpc error: code = NotFound desc = could not find container \"b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1\": container with ID starting with b65b430cd537c49309493d15a7b42543a0613be23335fa3e69656b6edaa11da1 not found: ID does not exist" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.737947 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xjrd6"] Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.854191 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7kkhs"] Feb 26 11:28:51 crc kubenswrapper[4724]: W0226 11:28:51.854855 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7412680_68df_4ebb_9961_8a89d8f83176.slice/crio-56efaf13a23bb4a5838f790eb19c537ab07293230b0c956b37fbebca4c8734aa WatchSource:0}: Error finding container 56efaf13a23bb4a5838f790eb19c537ab07293230b0c956b37fbebca4c8734aa: Status 404 returned error can't find the container with id 56efaf13a23bb4a5838f790eb19c537ab07293230b0c956b37fbebca4c8734aa Feb 26 11:28:51 crc kubenswrapper[4724]: E0226 11:28:51.860933 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb4b2f4d_eab5_465e_b78f_3f1eae492b05.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb4b2f4d_eab5_465e_b78f_3f1eae492b05.slice/crio-1d1a4bfcb1dc761309549aa998c0c25509c50b492e3f8b847f82bb94da72027a\": RecentStats: unable to find data in memory cache]" Feb 26 11:28:51 crc kubenswrapper[4724]: I0226 11:28:51.984761 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4b2f4d-eab5-465e-b78f-3f1eae492b05" path="/var/lib/kubelet/pods/fb4b2f4d-eab5-465e-b78f-3f1eae492b05/volumes" Feb 26 11:28:52 crc kubenswrapper[4724]: I0226 11:28:52.199538 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:52 crc kubenswrapper[4724]: E0226 11:28:52.199707 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 11:28:52 crc kubenswrapper[4724]: E0226 11:28:52.199732 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 11:28:52 crc kubenswrapper[4724]: E0226 11:28:52.199792 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift podName:d5750fa4-34c3-4c23-b0cc-af9726d3034c nodeName:}" failed. No retries permitted until 2026-02-26 11:28:54.199773103 +0000 UTC m=+1400.855512218 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift") pod "swift-storage-0" (UID: "d5750fa4-34c3-4c23-b0cc-af9726d3034c") : configmap "swift-ring-files" not found Feb 26 11:28:52 crc kubenswrapper[4724]: I0226 11:28:52.579346 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" event={"ID":"39d817a7-9237-4683-88aa-20bbbd487d49","Type":"ContainerStarted","Data":"8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571"} Feb 26 11:28:52 crc kubenswrapper[4724]: I0226 11:28:52.579485 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:28:52 crc kubenswrapper[4724]: I0226 11:28:52.580545 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kkhs" event={"ID":"e7412680-68df-4ebb-9961-8a89d8f83176","Type":"ContainerStarted","Data":"56efaf13a23bb4a5838f790eb19c537ab07293230b0c956b37fbebca4c8734aa"} Feb 26 11:28:52 crc kubenswrapper[4724]: I0226 11:28:52.597827 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" podStartSLOduration=3.597807733 podStartE2EDuration="3.597807733s" podCreationTimestamp="2026-02-26 11:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:52.594237872 +0000 UTC m=+1399.249976987" watchObservedRunningTime="2026-02-26 11:28:52.597807733 +0000 UTC m=+1399.253546838" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.773880 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fsv49"] Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.775395 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.786141 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.787998 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fsv49"] Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.834601 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-operator-scripts\") pod \"root-account-create-update-fsv49\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.834919 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbq68\" (UniqueName: \"kubernetes.io/projected/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-kube-api-access-nbq68\") pod \"root-account-create-update-fsv49\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.936970 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbq68\" (UniqueName: \"kubernetes.io/projected/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-kube-api-access-nbq68\") pod \"root-account-create-update-fsv49\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.937297 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-operator-scripts\") pod \"root-account-create-update-fsv49\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.938018 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-operator-scripts\") pod \"root-account-create-update-fsv49\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:53 crc kubenswrapper[4724]: I0226 11:28:53.957669 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbq68\" (UniqueName: \"kubernetes.io/projected/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-kube-api-access-nbq68\") pod \"root-account-create-update-fsv49\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:54 crc kubenswrapper[4724]: I0226 11:28:54.094927 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:54 crc kubenswrapper[4724]: I0226 11:28:54.242457 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:54 crc kubenswrapper[4724]: E0226 11:28:54.242686 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 11:28:54 crc kubenswrapper[4724]: E0226 11:28:54.242723 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 11:28:54 crc kubenswrapper[4724]: E0226 11:28:54.242793 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift podName:d5750fa4-34c3-4c23-b0cc-af9726d3034c nodeName:}" failed. No retries permitted until 2026-02-26 11:28:58.242776214 +0000 UTC m=+1404.898515329 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift") pod "swift-storage-0" (UID: "d5750fa4-34c3-4c23-b0cc-af9726d3034c") : configmap "swift-ring-files" not found Feb 26 11:28:55 crc kubenswrapper[4724]: I0226 11:28:55.605245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kkhs" event={"ID":"e7412680-68df-4ebb-9961-8a89d8f83176","Type":"ContainerStarted","Data":"e6c83014aa19524b396aff1631631d5b0c0e521ad2a66feeb53340a4cde6e788"} Feb 26 11:28:55 crc kubenswrapper[4724]: I0226 11:28:55.627197 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7kkhs" podStartSLOduration=1.187891159 podStartE2EDuration="4.627153616s" podCreationTimestamp="2026-02-26 11:28:51 +0000 UTC" firstStartedPulling="2026-02-26 11:28:51.857114677 +0000 UTC m=+1398.512853792" lastFinishedPulling="2026-02-26 11:28:55.296377134 +0000 UTC m=+1401.952116249" observedRunningTime="2026-02-26 11:28:55.619407928 +0000 UTC m=+1402.275147053" watchObservedRunningTime="2026-02-26 11:28:55.627153616 +0000 UTC m=+1402.282892751" Feb 26 11:28:55 crc kubenswrapper[4724]: W0226 11:28:55.671631 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a96424d_fa80_4b1b_8da7_55b1ba799cd2.slice/crio-62ecadadff27c4c581688eea4decc07829e97dea64499ba83bd41a9dd959b1ab WatchSource:0}: Error finding container 62ecadadff27c4c581688eea4decc07829e97dea64499ba83bd41a9dd959b1ab: Status 404 returned error can't find the container with id 62ecadadff27c4c581688eea4decc07829e97dea64499ba83bd41a9dd959b1ab Feb 26 11:28:55 crc kubenswrapper[4724]: I0226 11:28:55.673250 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fsv49"] Feb 26 11:28:55 crc kubenswrapper[4724]: I0226 11:28:55.907358 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.612711 4724 generic.go:334] "Generic (PLEG): container finished" podID="7a96424d-fa80-4b1b-8da7-55b1ba799cd2" containerID="d5e1fcd72bc882e298601e88c79f821339f695f6b1df1f0d88b74af683f964b2" exitCode=0 Feb 26 11:28:56 crc 
kubenswrapper[4724]: I0226 11:28:56.613982 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsv49" event={"ID":"7a96424d-fa80-4b1b-8da7-55b1ba799cd2","Type":"ContainerDied","Data":"d5e1fcd72bc882e298601e88c79f821339f695f6b1df1f0d88b74af683f964b2"} Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.614010 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsv49" event={"ID":"7a96424d-fa80-4b1b-8da7-55b1ba799cd2","Type":"ContainerStarted","Data":"62ecadadff27c4c581688eea4decc07829e97dea64499ba83bd41a9dd959b1ab"} Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.816882 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-npkbx"] Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.818598 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-npkbx" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.824028 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-npkbx"] Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.895829 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv286\" (UniqueName: \"kubernetes.io/projected/c18b60bf-4d85-4125-802b-6de116af3e23-kube-api-access-lv286\") pod \"glance-db-create-npkbx\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") " pod="openstack/glance-db-create-npkbx" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.895929 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18b60bf-4d85-4125-802b-6de116af3e23-operator-scripts\") pod \"glance-db-create-npkbx\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") " pod="openstack/glance-db-create-npkbx" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.936228 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5ab1-account-create-update-2pjjt"] Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.937363 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.945547 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.948601 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5ab1-account-create-update-2pjjt"] Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.996851 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18b60bf-4d85-4125-802b-6de116af3e23-operator-scripts\") pod \"glance-db-create-npkbx\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") " pod="openstack/glance-db-create-npkbx" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.997114 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7w7\" (UniqueName: \"kubernetes.io/projected/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-kube-api-access-6q7w7\") pod \"glance-5ab1-account-create-update-2pjjt\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.997292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-operator-scripts\") pod \"glance-5ab1-account-create-update-2pjjt\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.997455 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv286\" (UniqueName: \"kubernetes.io/projected/c18b60bf-4d85-4125-802b-6de116af3e23-kube-api-access-lv286\") pod \"glance-db-create-npkbx\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") " pod="openstack/glance-db-create-npkbx" Feb 26 11:28:56 crc kubenswrapper[4724]: I0226 11:28:56.998154 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18b60bf-4d85-4125-802b-6de116af3e23-operator-scripts\") pod \"glance-db-create-npkbx\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") " pod="openstack/glance-db-create-npkbx" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.025397 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv286\" (UniqueName: \"kubernetes.io/projected/c18b60bf-4d85-4125-802b-6de116af3e23-kube-api-access-lv286\") pod \"glance-db-create-npkbx\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") " pod="openstack/glance-db-create-npkbx" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.099306 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7w7\" (UniqueName: \"kubernetes.io/projected/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-kube-api-access-6q7w7\") pod \"glance-5ab1-account-create-update-2pjjt\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.099382 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-operator-scripts\") pod 
\"glance-5ab1-account-create-update-2pjjt\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.101949 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-operator-scripts\") pod \"glance-5ab1-account-create-update-2pjjt\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.118301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q7w7\" (UniqueName: \"kubernetes.io/projected/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-kube-api-access-6q7w7\") pod \"glance-5ab1-account-create-update-2pjjt\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.136868 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-npkbx" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.259742 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.484869 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bznxm"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.486052 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.507402 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bznxm"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.599288 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c767-account-create-update-97tv6"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.600344 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.604409 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.607210 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f55ef083-be52-48d6-8b62-3d8f92cbeec5-operator-scripts\") pod \"keystone-db-create-bznxm\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.607314 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp7lx\" (UniqueName: \"kubernetes.io/projected/f55ef083-be52-48d6-8b62-3d8f92cbeec5-kube-api-access-rp7lx\") pod \"keystone-db-create-bznxm\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.617682 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c767-account-create-update-97tv6"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.658965 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-npkbx"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.710189 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5j8\" (UniqueName: \"kubernetes.io/projected/d750effb-07c0-4dab-b0d3-0cf351228638-kube-api-access-7g5j8\") pod \"keystone-c767-account-create-update-97tv6\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.710311 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f55ef083-be52-48d6-8b62-3d8f92cbeec5-operator-scripts\") pod \"keystone-db-create-bznxm\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.710467 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d750effb-07c0-4dab-b0d3-0cf351228638-operator-scripts\") pod \"keystone-c767-account-create-update-97tv6\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.710499 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp7lx\" (UniqueName: \"kubernetes.io/projected/f55ef083-be52-48d6-8b62-3d8f92cbeec5-kube-api-access-rp7lx\") pod \"keystone-db-create-bznxm\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.711610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f55ef083-be52-48d6-8b62-3d8f92cbeec5-operator-scripts\") pod \"keystone-db-create-bznxm\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.754042 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-db-create-bm75g"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.755369 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bm75g" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.774853 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bm75g"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.808928 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp7lx\" (UniqueName: \"kubernetes.io/projected/f55ef083-be52-48d6-8b62-3d8f92cbeec5-kube-api-access-rp7lx\") pod \"keystone-db-create-bznxm\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.812099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5j8\" (UniqueName: \"kubernetes.io/projected/d750effb-07c0-4dab-b0d3-0cf351228638-kube-api-access-7g5j8\") pod \"keystone-c767-account-create-update-97tv6\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.812136 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f21b36-c8f6-4804-9f20-317255534086-operator-scripts\") pod \"placement-db-create-bm75g\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " pod="openstack/placement-db-create-bm75g" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.812161 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shkm\" (UniqueName: \"kubernetes.io/projected/22f21b36-c8f6-4804-9f20-317255534086-kube-api-access-6shkm\") pod \"placement-db-create-bm75g\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " pod="openstack/placement-db-create-bm75g" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.812301 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d750effb-07c0-4dab-b0d3-0cf351228638-operator-scripts\") pod \"keystone-c767-account-create-update-97tv6\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.812950 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d750effb-07c0-4dab-b0d3-0cf351228638-operator-scripts\") pod \"keystone-c767-account-create-update-97tv6\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.819582 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bznxm" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.856864 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5j8\" (UniqueName: \"kubernetes.io/projected/d750effb-07c0-4dab-b0d3-0cf351228638-kube-api-access-7g5j8\") pod \"keystone-c767-account-create-update-97tv6\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.864850 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5ab1-account-create-update-2pjjt"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.918133 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f21b36-c8f6-4804-9f20-317255534086-operator-scripts\") pod \"placement-db-create-bm75g\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " pod="openstack/placement-db-create-bm75g" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.918230 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shkm\" (UniqueName: \"kubernetes.io/projected/22f21b36-c8f6-4804-9f20-317255534086-kube-api-access-6shkm\") pod \"placement-db-create-bm75g\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " pod="openstack/placement-db-create-bm75g" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.924053 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f21b36-c8f6-4804-9f20-317255534086-operator-scripts\") pod \"placement-db-create-bm75g\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " pod="openstack/placement-db-create-bm75g" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.940743 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.963283 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-be15-account-create-update-dckff"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.964617 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.971126 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-be15-account-create-update-dckff"] Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.973372 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 26 11:28:57 crc kubenswrapper[4724]: I0226 11:28:57.979940 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shkm\" (UniqueName: \"kubernetes.io/projected/22f21b36-c8f6-4804-9f20-317255534086-kube-api-access-6shkm\") pod \"placement-db-create-bm75g\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " pod="openstack/placement-db-create-bm75g" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.126798 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f51f7-7dab-41bb-aafa-2ed352f22710-operator-scripts\") pod \"placement-be15-account-create-update-dckff\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.126874 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbzx2\" (UniqueName: \"kubernetes.io/projected/448f51f7-7dab-41bb-aafa-2ed352f22710-kube-api-access-sbzx2\") pod \"placement-be15-account-create-update-dckff\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.182648 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bm75g" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.228533 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f51f7-7dab-41bb-aafa-2ed352f22710-operator-scripts\") pod \"placement-be15-account-create-update-dckff\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.228806 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbzx2\" (UniqueName: \"kubernetes.io/projected/448f51f7-7dab-41bb-aafa-2ed352f22710-kube-api-access-sbzx2\") pod \"placement-be15-account-create-update-dckff\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.229492 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f51f7-7dab-41bb-aafa-2ed352f22710-operator-scripts\") pod \"placement-be15-account-create-update-dckff\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.269653 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbzx2\" (UniqueName: \"kubernetes.io/projected/448f51f7-7dab-41bb-aafa-2ed352f22710-kube-api-access-sbzx2\") pod \"placement-be15-account-create-update-dckff\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.333208 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:28:58 crc kubenswrapper[4724]: E0226 11:28:58.333380 4724 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 11:28:58 crc kubenswrapper[4724]: E0226 11:28:58.333394 4724 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 11:28:58 crc kubenswrapper[4724]: E0226 11:28:58.333445 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift podName:d5750fa4-34c3-4c23-b0cc-af9726d3034c nodeName:}" failed. No retries permitted until 2026-02-26 11:29:06.333430434 +0000 UTC m=+1412.989169549 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift") pod "swift-storage-0" (UID: "d5750fa4-34c3-4c23-b0cc-af9726d3034c") : configmap "swift-ring-files" not found Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.339432 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.445582 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.543632 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbq68\" (UniqueName: \"kubernetes.io/projected/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-kube-api-access-nbq68\") pod \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.544009 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-operator-scripts\") pod \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\" (UID: \"7a96424d-fa80-4b1b-8da7-55b1ba799cd2\") " Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.552817 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a96424d-fa80-4b1b-8da7-55b1ba799cd2" (UID: "7a96424d-fa80-4b1b-8da7-55b1ba799cd2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.579221 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-kube-api-access-nbq68" (OuterVolumeSpecName: "kube-api-access-nbq68") pod "7a96424d-fa80-4b1b-8da7-55b1ba799cd2" (UID: "7a96424d-fa80-4b1b-8da7-55b1ba799cd2"). InnerVolumeSpecName "kube-api-access-nbq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.637873 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsv49" event={"ID":"7a96424d-fa80-4b1b-8da7-55b1ba799cd2","Type":"ContainerDied","Data":"62ecadadff27c4c581688eea4decc07829e97dea64499ba83bd41a9dd959b1ab"} Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.637918 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ecadadff27c4c581688eea4decc07829e97dea64499ba83bd41a9dd959b1ab" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.637990 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsv49" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.642872 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5ab1-account-create-update-2pjjt" event={"ID":"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3","Type":"ContainerStarted","Data":"13dd34b2fee27b1bf61adc8145b2d97b48f7c21f29e6ad77f11e7bf52966aabd"} Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.642970 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5ab1-account-create-update-2pjjt" event={"ID":"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3","Type":"ContainerStarted","Data":"0f997a3bd43435f805df1e32ff9f7deea14ab390f503c78a4232cd192f8823ca"} Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.647650 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-npkbx" event={"ID":"c18b60bf-4d85-4125-802b-6de116af3e23","Type":"ContainerStarted","Data":"2ce603486d2b7cd9c95715600768435c25e1b1f4df8ab88dac1b372401148755"} Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.647699 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-npkbx" event={"ID":"c18b60bf-4d85-4125-802b-6de116af3e23","Type":"ContainerStarted","Data":"dcd457be91fe8d2ba4eed7e08e8534679bd3e7217539558b2e45faca914f06ad"} Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.659378 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbq68\" (UniqueName: \"kubernetes.io/projected/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-kube-api-access-nbq68\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.659420 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a96424d-fa80-4b1b-8da7-55b1ba799cd2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.673733 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5ab1-account-create-update-2pjjt" podStartSLOduration=2.673713268 podStartE2EDuration="2.673713268s" podCreationTimestamp="2026-02-26 11:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:58.654817075 +0000 UTC m=+1405.310556190" watchObservedRunningTime="2026-02-26 11:28:58.673713268 +0000 UTC m=+1405.329452383" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.687644 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bznxm"] Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.689597 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-npkbx" podStartSLOduration=2.689580243 podStartE2EDuration="2.689580243s" podCreationTimestamp="2026-02-26 11:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:58.669212153 +0000 UTC m=+1405.324951278" watchObservedRunningTime="2026-02-26 11:28:58.689580243 +0000 UTC m=+1405.345319358" Feb 26 11:28:58 crc kubenswrapper[4724]: I0226 11:28:58.831801 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c767-account-create-update-97tv6"] Feb 26 11:28:58 crc kubenswrapper[4724]: W0226 11:28:58.842371 4724 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd750effb_07c0_4dab_b0d3_0cf351228638.slice/crio-c18478e3b8e2c66e671e6ca3f669ae031c05558e9e873b2ad47039749a4e4e55 WatchSource:0}: Error finding container c18478e3b8e2c66e671e6ca3f669ae031c05558e9e873b2ad47039749a4e4e55: Status 404 returned error can't find the container with id c18478e3b8e2c66e671e6ca3f669ae031c05558e9e873b2ad47039749a4e4e55 Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:58.996731 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bm75g"] Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.021736 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-be15-account-create-update-dckff"] Feb 26 11:28:59 crc kubenswrapper[4724]: W0226 11:28:59.024003 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod448f51f7_7dab_41bb_aafa_2ed352f22710.slice/crio-3235a332dc492e5c4191a65eed58d5e4d4afbe869de3e31e2526b45b8f0ddf6c WatchSource:0}: Error finding container 3235a332dc492e5c4191a65eed58d5e4d4afbe869de3e31e2526b45b8f0ddf6c: Status 404 returned error can't find the container with id 3235a332dc492e5c4191a65eed58d5e4d4afbe869de3e31e2526b45b8f0ddf6c Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.657388 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c767-account-create-update-97tv6" event={"ID":"d750effb-07c0-4dab-b0d3-0cf351228638","Type":"ContainerStarted","Data":"e692bc8e1416f9b1d0afcb0b9f4e8f41a0b6d8aefed7dd652d0ae8efdb358a76"} Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.657442 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c767-account-create-update-97tv6" event={"ID":"d750effb-07c0-4dab-b0d3-0cf351228638","Type":"ContainerStarted","Data":"c18478e3b8e2c66e671e6ca3f669ae031c05558e9e873b2ad47039749a4e4e55"} Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.662336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bznxm" event={"ID":"f55ef083-be52-48d6-8b62-3d8f92cbeec5","Type":"ContainerStarted","Data":"12aef3eb63f611cba309c05081f312028b7458ba9d7f9ef2c514a1100339337a"} Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.662400 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bznxm" event={"ID":"f55ef083-be52-48d6-8b62-3d8f92cbeec5","Type":"ContainerStarted","Data":"36a48c33a7c7e9efa6a7b436518215f629474d48891eb6b6eb96cffb530ccda3"} Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.664407 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bm75g" event={"ID":"22f21b36-c8f6-4804-9f20-317255534086","Type":"ContainerStarted","Data":"46fafd04c5672acd344a3d68e52aeac492feec2f72c0edab74e5c872d0b52e95"} Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.664497 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bm75g" event={"ID":"22f21b36-c8f6-4804-9f20-317255534086","Type":"ContainerStarted","Data":"c0538c9d8ac7e1ad3f531634867fddf78c78d0f8e9dafd5ef697c20617c4f6d1"} Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.665773 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-be15-account-create-update-dckff" event={"ID":"448f51f7-7dab-41bb-aafa-2ed352f22710","Type":"ContainerStarted","Data":"5d35e005213ebc8f35ff1e070ceecfc17c89396b8959da61f9678c26661fb115"} Feb 26 11:28:59 crc 
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.668090 4724 generic.go:334] "Generic (PLEG): container finished" podID="c18b60bf-4d85-4125-802b-6de116af3e23" containerID="2ce603486d2b7cd9c95715600768435c25e1b1f4df8ab88dac1b372401148755" exitCode=0
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.668632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-npkbx" event={"ID":"c18b60bf-4d85-4125-802b-6de116af3e23","Type":"ContainerDied","Data":"2ce603486d2b7cd9c95715600768435c25e1b1f4df8ab88dac1b372401148755"}
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.706645 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c767-account-create-update-97tv6" podStartSLOduration=2.70662685 podStartE2EDuration="2.70662685s" podCreationTimestamp="2026-02-26 11:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:59.68235773 +0000 UTC m=+1406.338096865" watchObservedRunningTime="2026-02-26 11:28:59.70662685 +0000 UTC m=+1406.362365965"
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.709486 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-be15-account-create-update-dckff" podStartSLOduration=2.709475543 podStartE2EDuration="2.709475543s" podCreationTimestamp="2026-02-26 11:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:59.702259409 +0000 UTC m=+1406.357998544" watchObservedRunningTime="2026-02-26 11:28:59.709475543 +0000 UTC m=+1406.365214668"
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.717622 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-bm75g" podStartSLOduration=2.717615581 podStartE2EDuration="2.717615581s" podCreationTimestamp="2026-02-26 11:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:59.716054911 +0000 UTC m=+1406.371794026" watchObservedRunningTime="2026-02-26 11:28:59.717615581 +0000 UTC m=+1406.373354696"
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.733838 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-bznxm" podStartSLOduration=2.733812835 podStartE2EDuration="2.733812835s" podCreationTimestamp="2026-02-26 11:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:28:59.730921041 +0000 UTC m=+1406.386660176" watchObservedRunningTime="2026-02-26 11:28:59.733812835 +0000 UTC m=+1406.389551950"
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.777265 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d"
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.813793 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fsv49"]
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.819696 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fsv49"]
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.951812 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xcnz9"]
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.952051 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-xcnz9" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" containerName="dnsmasq-dns" containerID="cri-o://b2aa358ef59a07eee474199fd651e5135324fbf28420965f3554f7676f82062d" gracePeriod=10
Feb 26 11:28:59 crc kubenswrapper[4724]: I0226 11:28:59.988599 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a96424d-fa80-4b1b-8da7-55b1ba799cd2" path="/var/lib/kubelet/pods/7a96424d-fa80-4b1b-8da7-55b1ba799cd2/volumes"
Feb 26 11:29:00 crc kubenswrapper[4724]: I0226 11:29:00.675816 4724 generic.go:334] "Generic (PLEG): container finished" podID="8d99e287-d985-4f45-9117-0ccf544d858e" containerID="b2aa358ef59a07eee474199fd651e5135324fbf28420965f3554f7676f82062d" exitCode=0
Feb 26 11:29:00 crc kubenswrapper[4724]: I0226 11:29:00.675997 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xcnz9" event={"ID":"8d99e287-d985-4f45-9117-0ccf544d858e","Type":"ContainerDied","Data":"b2aa358ef59a07eee474199fd651e5135324fbf28420965f3554f7676f82062d"}
Feb 26 11:29:00 crc kubenswrapper[4724]: I0226 11:29:00.907238 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-xcnz9" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.053680 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-npkbx"
Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.231217 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18b60bf-4d85-4125-802b-6de116af3e23-operator-scripts\") pod \"c18b60bf-4d85-4125-802b-6de116af3e23\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") "
Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.231453 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv286\" (UniqueName: \"kubernetes.io/projected/c18b60bf-4d85-4125-802b-6de116af3e23-kube-api-access-lv286\") pod \"c18b60bf-4d85-4125-802b-6de116af3e23\" (UID: \"c18b60bf-4d85-4125-802b-6de116af3e23\") "
Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.231977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18b60bf-4d85-4125-802b-6de116af3e23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c18b60bf-4d85-4125-802b-6de116af3e23" (UID: "c18b60bf-4d85-4125-802b-6de116af3e23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.239205 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18b60bf-4d85-4125-802b-6de116af3e23-kube-api-access-lv286" (OuterVolumeSpecName: "kube-api-access-lv286") pod "c18b60bf-4d85-4125-802b-6de116af3e23" (UID: "c18b60bf-4d85-4125-802b-6de116af3e23"). InnerVolumeSpecName "kube-api-access-lv286". PluginName "kubernetes.io/projected", VolumeGidValue ""
InnerVolumeSpecName "kube-api-access-lv286". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.333009 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv286\" (UniqueName: \"kubernetes.io/projected/c18b60bf-4d85-4125-802b-6de116af3e23-kube-api-access-lv286\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.333044 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18b60bf-4d85-4125-802b-6de116af3e23-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.350578 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.433779 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-nb\") pod \"8d99e287-d985-4f45-9117-0ccf544d858e\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.433851 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-sb\") pod \"8d99e287-d985-4f45-9117-0ccf544d858e\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.433916 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvtqf\" (UniqueName: \"kubernetes.io/projected/8d99e287-d985-4f45-9117-0ccf544d858e-kube-api-access-fvtqf\") pod \"8d99e287-d985-4f45-9117-0ccf544d858e\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.433965 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-config\") pod \"8d99e287-d985-4f45-9117-0ccf544d858e\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.434043 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-dns-svc\") pod \"8d99e287-d985-4f45-9117-0ccf544d858e\" (UID: \"8d99e287-d985-4f45-9117-0ccf544d858e\") " Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.447336 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d99e287-d985-4f45-9117-0ccf544d858e-kube-api-access-fvtqf" (OuterVolumeSpecName: "kube-api-access-fvtqf") pod "8d99e287-d985-4f45-9117-0ccf544d858e" (UID: "8d99e287-d985-4f45-9117-0ccf544d858e"). InnerVolumeSpecName "kube-api-access-fvtqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.475976 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d99e287-d985-4f45-9117-0ccf544d858e" (UID: "8d99e287-d985-4f45-9117-0ccf544d858e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.476647 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d99e287-d985-4f45-9117-0ccf544d858e" (UID: "8d99e287-d985-4f45-9117-0ccf544d858e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.478613 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d99e287-d985-4f45-9117-0ccf544d858e" (UID: "8d99e287-d985-4f45-9117-0ccf544d858e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.495399 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-config" (OuterVolumeSpecName: "config") pod "8d99e287-d985-4f45-9117-0ccf544d858e" (UID: "8d99e287-d985-4f45-9117-0ccf544d858e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.535861 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.536090 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.536158 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.536250 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d99e287-d985-4f45-9117-0ccf544d858e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.536321 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvtqf\" (UniqueName: \"kubernetes.io/projected/8d99e287-d985-4f45-9117-0ccf544d858e-kube-api-access-fvtqf\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.685985 4724 generic.go:334] "Generic (PLEG): container finished" podID="22f21b36-c8f6-4804-9f20-317255534086" containerID="46fafd04c5672acd344a3d68e52aeac492feec2f72c0edab74e5c872d0b52e95" exitCode=0 Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.686065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bm75g" event={"ID":"22f21b36-c8f6-4804-9f20-317255534086","Type":"ContainerDied","Data":"46fafd04c5672acd344a3d68e52aeac492feec2f72c0edab74e5c872d0b52e95"} Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.688656 4724 generic.go:334] "Generic (PLEG): container finished" podID="f55ef083-be52-48d6-8b62-3d8f92cbeec5" containerID="12aef3eb63f611cba309c05081f312028b7458ba9d7f9ef2c514a1100339337a" exitCode=0 Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.688737 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bznxm" event={"ID":"f55ef083-be52-48d6-8b62-3d8f92cbeec5","Type":"ContainerDied","Data":"12aef3eb63f611cba309c05081f312028b7458ba9d7f9ef2c514a1100339337a"} Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.690970 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-npkbx" event={"ID":"c18b60bf-4d85-4125-802b-6de116af3e23","Type":"ContainerDied","Data":"dcd457be91fe8d2ba4eed7e08e8534679bd3e7217539558b2e45faca914f06ad"} Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.691006 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd457be91fe8d2ba4eed7e08e8534679bd3e7217539558b2e45faca914f06ad" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.691086 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-npkbx" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.693566 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xcnz9" event={"ID":"8d99e287-d985-4f45-9117-0ccf544d858e","Type":"ContainerDied","Data":"34f7dae08cd3999e64f813cbe96280aba87938b5b68508a329b72726eff7e97f"} Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.693723 4724 scope.go:117] "RemoveContainer" containerID="b2aa358ef59a07eee474199fd651e5135324fbf28420965f3554f7676f82062d" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.693959 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xcnz9" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.727964 4724 scope.go:117] "RemoveContainer" containerID="d894167865763a371779af2fff76f4ff16a7b25a6fc1371f15ad4750ab9f6ccf" Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.765826 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xcnz9"] Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.773135 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xcnz9"] Feb 26 11:29:01 crc kubenswrapper[4724]: I0226 11:29:01.989484 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" path="/var/lib/kubelet/pods/8d99e287-d985-4f45-9117-0ccf544d858e/volumes" Feb 26 11:29:02 crc kubenswrapper[4724]: E0226 11:29:02.043656 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd30bb3e5_0cb3_4fe1_9507_6b3527e260c3.slice/crio-conmon-13dd34b2fee27b1bf61adc8145b2d97b48f7c21f29e6ad77f11e7bf52966aabd.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:29:02 crc kubenswrapper[4724]: I0226 11:29:02.701544 4724 generic.go:334] "Generic (PLEG): container finished" podID="d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" containerID="13dd34b2fee27b1bf61adc8145b2d97b48f7c21f29e6ad77f11e7bf52966aabd" exitCode=0 Feb 26 11:29:02 crc kubenswrapper[4724]: I0226 11:29:02.701641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5ab1-account-create-update-2pjjt" event={"ID":"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3","Type":"ContainerDied","Data":"13dd34b2fee27b1bf61adc8145b2d97b48f7c21f29e6ad77f11e7bf52966aabd"} Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.129593 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bznxm" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.138081 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bm75g" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.286464 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6shkm\" (UniqueName: \"kubernetes.io/projected/22f21b36-c8f6-4804-9f20-317255534086-kube-api-access-6shkm\") pod \"22f21b36-c8f6-4804-9f20-317255534086\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.286581 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp7lx\" (UniqueName: \"kubernetes.io/projected/f55ef083-be52-48d6-8b62-3d8f92cbeec5-kube-api-access-rp7lx\") pod \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.286613 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f21b36-c8f6-4804-9f20-317255534086-operator-scripts\") pod \"22f21b36-c8f6-4804-9f20-317255534086\" (UID: \"22f21b36-c8f6-4804-9f20-317255534086\") " Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.286648 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f55ef083-be52-48d6-8b62-3d8f92cbeec5-operator-scripts\") pod \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\" (UID: \"f55ef083-be52-48d6-8b62-3d8f92cbeec5\") " Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.294272 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55ef083-be52-48d6-8b62-3d8f92cbeec5-kube-api-access-rp7lx" (OuterVolumeSpecName: "kube-api-access-rp7lx") pod "f55ef083-be52-48d6-8b62-3d8f92cbeec5" (UID: "f55ef083-be52-48d6-8b62-3d8f92cbeec5"). InnerVolumeSpecName "kube-api-access-rp7lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.305480 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f21b36-c8f6-4804-9f20-317255534086-kube-api-access-6shkm" (OuterVolumeSpecName: "kube-api-access-6shkm") pod "22f21b36-c8f6-4804-9f20-317255534086" (UID: "22f21b36-c8f6-4804-9f20-317255534086"). InnerVolumeSpecName "kube-api-access-6shkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.318986 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f21b36-c8f6-4804-9f20-317255534086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22f21b36-c8f6-4804-9f20-317255534086" (UID: "22f21b36-c8f6-4804-9f20-317255534086"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.322524 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55ef083-be52-48d6-8b62-3d8f92cbeec5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f55ef083-be52-48d6-8b62-3d8f92cbeec5" (UID: "f55ef083-be52-48d6-8b62-3d8f92cbeec5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.388234 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6shkm\" (UniqueName: \"kubernetes.io/projected/22f21b36-c8f6-4804-9f20-317255534086-kube-api-access-6shkm\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.388265 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp7lx\" (UniqueName: \"kubernetes.io/projected/f55ef083-be52-48d6-8b62-3d8f92cbeec5-kube-api-access-rp7lx\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.388274 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f21b36-c8f6-4804-9f20-317255534086-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.388284 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f55ef083-be52-48d6-8b62-3d8f92cbeec5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.716492 4724 generic.go:334] "Generic (PLEG): container finished" podID="d750effb-07c0-4dab-b0d3-0cf351228638" containerID="e692bc8e1416f9b1d0afcb0b9f4e8f41a0b6d8aefed7dd652d0ae8efdb358a76" exitCode=0 Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.716600 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c767-account-create-update-97tv6" event={"ID":"d750effb-07c0-4dab-b0d3-0cf351228638","Type":"ContainerDied","Data":"e692bc8e1416f9b1d0afcb0b9f4e8f41a0b6d8aefed7dd652d0ae8efdb358a76"} Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.719620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bm75g" event={"ID":"22f21b36-c8f6-4804-9f20-317255534086","Type":"ContainerDied","Data":"c0538c9d8ac7e1ad3f531634867fddf78c78d0f8e9dafd5ef697c20617c4f6d1"} Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.719688 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0538c9d8ac7e1ad3f531634867fddf78c78d0f8e9dafd5ef697c20617c4f6d1" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.719759 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bm75g" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.723103 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bznxm" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.723806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bznxm" event={"ID":"f55ef083-be52-48d6-8b62-3d8f92cbeec5","Type":"ContainerDied","Data":"36a48c33a7c7e9efa6a7b436518215f629474d48891eb6b6eb96cffb530ccda3"} Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.723896 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a48c33a7c7e9efa6a7b436518215f629474d48891eb6b6eb96cffb530ccda3" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.726728 4724 generic.go:334] "Generic (PLEG): container finished" podID="448f51f7-7dab-41bb-aafa-2ed352f22710" containerID="5d35e005213ebc8f35ff1e070ceecfc17c89396b8959da61f9678c26661fb115" exitCode=0 Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.726998 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-be15-account-create-update-dckff" event={"ID":"448f51f7-7dab-41bb-aafa-2ed352f22710","Type":"ContainerDied","Data":"5d35e005213ebc8f35ff1e070ceecfc17c89396b8959da61f9678c26661fb115"} Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.824472 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lxnxw"] Feb 26 11:29:03 crc kubenswrapper[4724]: E0226 11:29:03.824976 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55ef083-be52-48d6-8b62-3d8f92cbeec5" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825007 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55ef083-be52-48d6-8b62-3d8f92cbeec5" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: E0226 11:29:03.825028 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a96424d-fa80-4b1b-8da7-55b1ba799cd2" containerName="mariadb-account-create-update" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825039 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a96424d-fa80-4b1b-8da7-55b1ba799cd2" containerName="mariadb-account-create-update" Feb 26 11:29:03 crc kubenswrapper[4724]: E0226 11:29:03.825052 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18b60bf-4d85-4125-802b-6de116af3e23" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825060 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18b60bf-4d85-4125-802b-6de116af3e23" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: E0226 11:29:03.825074 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f21b36-c8f6-4804-9f20-317255534086" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825085 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="22f21b36-c8f6-4804-9f20-317255534086" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: E0226 11:29:03.825120 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" containerName="init" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825129 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" containerName="init" Feb 26 11:29:03 crc kubenswrapper[4724]: E0226 11:29:03.825136 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" 
containerName="dnsmasq-dns" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825144 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" containerName="dnsmasq-dns" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825360 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a96424d-fa80-4b1b-8da7-55b1ba799cd2" containerName="mariadb-account-create-update" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825389 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55ef083-be52-48d6-8b62-3d8f92cbeec5" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825406 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d99e287-d985-4f45-9117-0ccf544d858e" containerName="dnsmasq-dns" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825421 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18b60bf-4d85-4125-802b-6de116af3e23" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.825432 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f21b36-c8f6-4804-9f20-317255534086" containerName="mariadb-database-create" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.826059 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.828099 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 11:29:03 crc kubenswrapper[4724]: I0226 11:29:03.832773 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lxnxw"] Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:03.997920 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gj8\" (UniqueName: \"kubernetes.io/projected/372ab9a2-9dbd-4f77-ba28-62470047128b-kube-api-access-f5gj8\") pod \"root-account-create-update-lxnxw\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:03.998071 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372ab9a2-9dbd-4f77-ba28-62470047128b-operator-scripts\") pod \"root-account-create-update-lxnxw\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.099942 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gj8\" (UniqueName: \"kubernetes.io/projected/372ab9a2-9dbd-4f77-ba28-62470047128b-kube-api-access-f5gj8\") pod \"root-account-create-update-lxnxw\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.100194 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372ab9a2-9dbd-4f77-ba28-62470047128b-operator-scripts\") pod \"root-account-create-update-lxnxw\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.101086 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372ab9a2-9dbd-4f77-ba28-62470047128b-operator-scripts\") pod \"root-account-create-update-lxnxw\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.117297 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gj8\" (UniqueName: \"kubernetes.io/projected/372ab9a2-9dbd-4f77-ba28-62470047128b-kube-api-access-f5gj8\") pod \"root-account-create-update-lxnxw\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.171727 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.183956 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.305158 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-operator-scripts\") pod \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.305609 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7w7\" (UniqueName: \"kubernetes.io/projected/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-kube-api-access-6q7w7\") pod \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\" (UID: \"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3\") " Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.306704 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" (UID: "d30bb3e5-0cb3-4fe1-9507-6b3527e260c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.311656 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-kube-api-access-6q7w7" (OuterVolumeSpecName: "kube-api-access-6q7w7") pod "d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" (UID: "d30bb3e5-0cb3-4fe1-9507-6b3527e260c3"). InnerVolumeSpecName "kube-api-access-6q7w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.409600 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:04 crc kubenswrapper[4724]: I0226 11:29:04.409645 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q7w7\" (UniqueName: \"kubernetes.io/projected/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3-kube-api-access-6q7w7\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:04.630140 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lxnxw"] Feb 26 11:29:05 crc kubenswrapper[4724]: W0226 11:29:04.639974 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372ab9a2_9dbd_4f77_ba28_62470047128b.slice/crio-623364b8b5911d059d757c94287cb0bb16503d63ac113b398a6d4783ed40aa55 WatchSource:0}: Error finding container 623364b8b5911d059d757c94287cb0bb16503d63ac113b398a6d4783ed40aa55: Status 404 returned error can't find the container with id 623364b8b5911d059d757c94287cb0bb16503d63ac113b398a6d4783ed40aa55 Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:04.737108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5ab1-account-create-update-2pjjt" event={"ID":"d30bb3e5-0cb3-4fe1-9507-6b3527e260c3","Type":"ContainerDied","Data":"0f997a3bd43435f805df1e32ff9f7deea14ab390f503c78a4232cd192f8823ca"} Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:04.737148 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f997a3bd43435f805df1e32ff9f7deea14ab390f503c78a4232cd192f8823ca" Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:04.737225 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5ab1-account-create-update-2pjjt" Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:04.742309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxnxw" event={"ID":"372ab9a2-9dbd-4f77-ba28-62470047128b","Type":"ContainerStarted","Data":"623364b8b5911d059d757c94287cb0bb16503d63ac113b398a6d4783ed40aa55"} Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:05.757426 4724 generic.go:334] "Generic (PLEG): container finished" podID="372ab9a2-9dbd-4f77-ba28-62470047128b" containerID="4cfd1f80078583554f5f1f90824e816b28b9447c41a0d397bca70469d63e4d7d" exitCode=0 Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:05.758335 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxnxw" event={"ID":"372ab9a2-9dbd-4f77-ba28-62470047128b","Type":"ContainerDied","Data":"4cfd1f80078583554f5f1f90824e816b28b9447c41a0d397bca70469d63e4d7d"} Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:05.940472 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:29:05 crc kubenswrapper[4724]: I0226 11:29:05.948169 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.044016 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d750effb-07c0-4dab-b0d3-0cf351228638-operator-scripts\") pod \"d750effb-07c0-4dab-b0d3-0cf351228638\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.044058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5j8\" (UniqueName: \"kubernetes.io/projected/d750effb-07c0-4dab-b0d3-0cf351228638-kube-api-access-7g5j8\") pod \"d750effb-07c0-4dab-b0d3-0cf351228638\" (UID: \"d750effb-07c0-4dab-b0d3-0cf351228638\") " Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.044133 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f51f7-7dab-41bb-aafa-2ed352f22710-operator-scripts\") pod \"448f51f7-7dab-41bb-aafa-2ed352f22710\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.044224 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbzx2\" (UniqueName: \"kubernetes.io/projected/448f51f7-7dab-41bb-aafa-2ed352f22710-kube-api-access-sbzx2\") pod \"448f51f7-7dab-41bb-aafa-2ed352f22710\" (UID: \"448f51f7-7dab-41bb-aafa-2ed352f22710\") " Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.045034 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d750effb-07c0-4dab-b0d3-0cf351228638-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d750effb-07c0-4dab-b0d3-0cf351228638" (UID: "d750effb-07c0-4dab-b0d3-0cf351228638"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.045948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448f51f7-7dab-41bb-aafa-2ed352f22710-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "448f51f7-7dab-41bb-aafa-2ed352f22710" (UID: "448f51f7-7dab-41bb-aafa-2ed352f22710"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.054890 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d750effb-07c0-4dab-b0d3-0cf351228638-kube-api-access-7g5j8" (OuterVolumeSpecName: "kube-api-access-7g5j8") pod "d750effb-07c0-4dab-b0d3-0cf351228638" (UID: "d750effb-07c0-4dab-b0d3-0cf351228638"). InnerVolumeSpecName "kube-api-access-7g5j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.066547 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448f51f7-7dab-41bb-aafa-2ed352f22710-kube-api-access-sbzx2" (OuterVolumeSpecName: "kube-api-access-sbzx2") pod "448f51f7-7dab-41bb-aafa-2ed352f22710" (UID: "448f51f7-7dab-41bb-aafa-2ed352f22710"). InnerVolumeSpecName "kube-api-access-sbzx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.147307 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbzx2\" (UniqueName: \"kubernetes.io/projected/448f51f7-7dab-41bb-aafa-2ed352f22710-kube-api-access-sbzx2\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.147351 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d750effb-07c0-4dab-b0d3-0cf351228638-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.147368 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5j8\" (UniqueName: \"kubernetes.io/projected/d750effb-07c0-4dab-b0d3-0cf351228638-kube-api-access-7g5j8\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.147380 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448f51f7-7dab-41bb-aafa-2ed352f22710-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.350038 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.355068 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5750fa4-34c3-4c23-b0cc-af9726d3034c-etc-swift\") pod \"swift-storage-0\" (UID: \"d5750fa4-34c3-4c23-b0cc-af9726d3034c\") " pod="openstack/swift-storage-0" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.420859 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.768096 4724 generic.go:334] "Generic (PLEG): container finished" podID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerID="2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21" exitCode=0 Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.768245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad24283d-3357-4230-a2b2-3d5ed0fefa7f","Type":"ContainerDied","Data":"2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21"} Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.769886 4724 generic.go:334] "Generic (PLEG): container finished" podID="e7412680-68df-4ebb-9961-8a89d8f83176" containerID="e6c83014aa19524b396aff1631631d5b0c0e521ad2a66feeb53340a4cde6e788" exitCode=0 Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.769942 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kkhs" event={"ID":"e7412680-68df-4ebb-9961-8a89d8f83176","Type":"ContainerDied","Data":"e6c83014aa19524b396aff1631631d5b0c0e521ad2a66feeb53340a4cde6e788"} Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.776734 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-be15-account-create-update-dckff" event={"ID":"448f51f7-7dab-41bb-aafa-2ed352f22710","Type":"ContainerDied","Data":"3235a332dc492e5c4191a65eed58d5e4d4afbe869de3e31e2526b45b8f0ddf6c"} Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.776803 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3235a332dc492e5c4191a65eed58d5e4d4afbe869de3e31e2526b45b8f0ddf6c" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.776905 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-be15-account-create-update-dckff" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.781471 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c767-account-create-update-97tv6" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.781725 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c767-account-create-update-97tv6" event={"ID":"d750effb-07c0-4dab-b0d3-0cf351228638","Type":"ContainerDied","Data":"c18478e3b8e2c66e671e6ca3f669ae031c05558e9e873b2ad47039749a4e4e55"} Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.781774 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c18478e3b8e2c66e671e6ca3f669ae031c05558e9e873b2ad47039749a4e4e55" Feb 26 11:29:06 crc kubenswrapper[4724]: I0226 11:29:06.954775 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.178247 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205350 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6v8t4"] Feb 26 11:29:07 crc kubenswrapper[4724]: E0226 11:29:07.205729 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d750effb-07c0-4dab-b0d3-0cf351228638" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205742 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d750effb-07c0-4dab-b0d3-0cf351228638" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: E0226 11:29:07.205751 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205757 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: E0226 11:29:07.205771 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372ab9a2-9dbd-4f77-ba28-62470047128b" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205777 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="372ab9a2-9dbd-4f77-ba28-62470047128b" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: E0226 11:29:07.205788 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448f51f7-7dab-41bb-aafa-2ed352f22710" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205795 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="448f51f7-7dab-41bb-aafa-2ed352f22710" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205959 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="372ab9a2-9dbd-4f77-ba28-62470047128b" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205968 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="448f51f7-7dab-41bb-aafa-2ed352f22710" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205978 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.205988 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d750effb-07c0-4dab-b0d3-0cf351228638" containerName="mariadb-account-create-update" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.206866 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.210535 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4k8sf" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.210685 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.226404 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6v8t4"] Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.366337 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372ab9a2-9dbd-4f77-ba28-62470047128b-operator-scripts\") pod \"372ab9a2-9dbd-4f77-ba28-62470047128b\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.366738 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5gj8\" (UniqueName: \"kubernetes.io/projected/372ab9a2-9dbd-4f77-ba28-62470047128b-kube-api-access-f5gj8\") pod \"372ab9a2-9dbd-4f77-ba28-62470047128b\" (UID: \"372ab9a2-9dbd-4f77-ba28-62470047128b\") " Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.367026 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372ab9a2-9dbd-4f77-ba28-62470047128b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "372ab9a2-9dbd-4f77-ba28-62470047128b" (UID: "372ab9a2-9dbd-4f77-ba28-62470047128b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.367134 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-config-data\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.367194 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-combined-ca-bundle\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.367228 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-db-sync-config-data\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.367250 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdc84\" (UniqueName: \"kubernetes.io/projected/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-kube-api-access-mdc84\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.367302 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372ab9a2-9dbd-4f77-ba28-62470047128b-operator-scripts\") 
on node \"crc\" DevicePath \"\"" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.370533 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372ab9a2-9dbd-4f77-ba28-62470047128b-kube-api-access-f5gj8" (OuterVolumeSpecName: "kube-api-access-f5gj8") pod "372ab9a2-9dbd-4f77-ba28-62470047128b" (UID: "372ab9a2-9dbd-4f77-ba28-62470047128b"). InnerVolumeSpecName "kube-api-access-f5gj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.469083 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdc84\" (UniqueName: \"kubernetes.io/projected/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-kube-api-access-mdc84\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.469267 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-config-data\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.469307 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-combined-ca-bundle\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.469341 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-db-sync-config-data\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.469394 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5gj8\" (UniqueName: \"kubernetes.io/projected/372ab9a2-9dbd-4f77-ba28-62470047128b-kube-api-access-f5gj8\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.472642 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-db-sync-config-data\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.472690 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-config-data\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.473226 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-combined-ca-bundle\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.490075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdc84\" 
(UniqueName: \"kubernetes.io/projected/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-kube-api-access-mdc84\") pod \"glance-db-sync-6v8t4\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.528258 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.794066 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"5f4a5a77f26b84bffba34b3cbbb56cfbbe183c7390831f908f3448d22ede4f19"} Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.799546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad24283d-3357-4230-a2b2-3d5ed0fefa7f","Type":"ContainerStarted","Data":"080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9"} Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.799946 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.802001 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lxnxw" event={"ID":"372ab9a2-9dbd-4f77-ba28-62470047128b","Type":"ContainerDied","Data":"623364b8b5911d059d757c94287cb0bb16503d63ac113b398a6d4783ed40aa55"} Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.802124 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lxnxw" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.803052 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623364b8b5911d059d757c94287cb0bb16503d63ac113b398a6d4783ed40aa55" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.830481 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 26 11:29:07 crc kubenswrapper[4724]: I0226 11:29:07.864730 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.917714116 podStartE2EDuration="1m5.864711497s" podCreationTimestamp="2026-02-26 11:28:02 +0000 UTC" firstStartedPulling="2026-02-26 11:28:04.334638685 +0000 UTC m=+1350.990377800" lastFinishedPulling="2026-02-26 11:28:33.281636066 +0000 UTC m=+1379.937375181" observedRunningTime="2026-02-26 11:29:07.840057947 +0000 UTC m=+1414.495797072" watchObservedRunningTime="2026-02-26 11:29:07.864711497 +0000 UTC m=+1414.520450612" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.170091 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6v8t4"] Feb 26 11:29:08 crc kubenswrapper[4724]: W0226 11:29:08.196107 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5a58b47_8a63_4ec7_aad6_5b7668e56faa.slice/crio-b93734af6d1de38273f04ce1e7c034a259b1a06069e6e6948ee058734c3267fe WatchSource:0}: Error finding container b93734af6d1de38273f04ce1e7c034a259b1a06069e6e6948ee058734c3267fe: Status 404 returned error can't find the container with id b93734af6d1de38273f04ce1e7c034a259b1a06069e6e6948ee058734c3267fe Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.264717 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386547 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7412680-68df-4ebb-9961-8a89d8f83176-etc-swift\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386615 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-swiftconf\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386665 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-scripts\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386695 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7z82\" (UniqueName: \"kubernetes.io/projected/e7412680-68df-4ebb-9961-8a89d8f83176-kube-api-access-c7z82\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386856 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-dispersionconf\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386883 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-ring-data-devices\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.386910 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-combined-ca-bundle\") pod \"e7412680-68df-4ebb-9961-8a89d8f83176\" (UID: \"e7412680-68df-4ebb-9961-8a89d8f83176\") " Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.444000 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7412680-68df-4ebb-9961-8a89d8f83176-kube-api-access-c7z82" (OuterVolumeSpecName: "kube-api-access-c7z82") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "kube-api-access-c7z82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.473278 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.474759 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7412680-68df-4ebb-9961-8a89d8f83176-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.475014 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-scripts" (OuterVolumeSpecName: "scripts") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.480130 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.480210 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.481522 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7412680-68df-4ebb-9961-8a89d8f83176" (UID: "e7412680-68df-4ebb-9961-8a89d8f83176"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488631 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488658 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7z82\" (UniqueName: \"kubernetes.io/projected/e7412680-68df-4ebb-9961-8a89d8f83176-kube-api-access-c7z82\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488669 4724 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488679 4724 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7412680-68df-4ebb-9961-8a89d8f83176-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488689 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488698 4724 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7412680-68df-4ebb-9961-8a89d8f83176-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.488706 4724 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7412680-68df-4ebb-9961-8a89d8f83176-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.647533 4724 scope.go:117] "RemoveContainer" containerID="e57c843ee14ebaa3663e4d3163f22e09905491a0a163c402b52e48cd7b2e0b37" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.822214 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kkhs" event={"ID":"e7412680-68df-4ebb-9961-8a89d8f83176","Type":"ContainerDied","Data":"56efaf13a23bb4a5838f790eb19c537ab07293230b0c956b37fbebca4c8734aa"} Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.822502 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56efaf13a23bb4a5838f790eb19c537ab07293230b0c956b37fbebca4c8734aa" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.822237 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7kkhs" Feb 26 11:29:08 crc kubenswrapper[4724]: I0226 11:29:08.825927 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6v8t4" event={"ID":"f5a58b47-8a63-4ec7-aad6-5b7668e56faa","Type":"ContainerStarted","Data":"b93734af6d1de38273f04ce1e7c034a259b1a06069e6e6948ee058734c3267fe"} Feb 26 11:29:09 crc kubenswrapper[4724]: I0226 11:29:09.835813 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lxnxw"] Feb 26 11:29:09 crc kubenswrapper[4724]: I0226 11:29:09.842389 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lxnxw"] Feb 26 11:29:09 crc kubenswrapper[4724]: I0226 11:29:09.987538 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372ab9a2-9dbd-4f77-ba28-62470047128b" path="/var/lib/kubelet/pods/372ab9a2-9dbd-4f77-ba28-62470047128b/volumes" Feb 26 11:29:10 crc kubenswrapper[4724]: I0226 11:29:10.847951 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"9b9465e02a9b6704ab236a9ded9a45b1a47a1f46eb5deef0e833a9ff3ab8c732"} Feb 26 11:29:10 crc kubenswrapper[4724]: I0226 11:29:10.848285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"d5614338fc3e67b2444d25e97657fc7c6b61db02df4deb936dd9b2fef9cbfc1c"} Feb 26 11:29:10 crc kubenswrapper[4724]: I0226 11:29:10.848297 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"1fd13c664790aaab526cd6bf8809e3d09f4fe455b9a926a2ada4c7344a9e0de3"} Feb 26 11:29:10 crc kubenswrapper[4724]: I0226 11:29:10.848306 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"7b994ea5a1280886b5218268622b170cb75c16be9c907902bf9b5677e44bc772"} Feb 26 11:29:11 crc kubenswrapper[4724]: I0226 11:29:11.840306 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x9682" podUID="5b8939ea-2d97-461c-ad75-cba4379157f7" containerName="ovn-controller" probeResult="failure" output=< Feb 26 11:29:11 crc kubenswrapper[4724]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 26 11:29:11 crc kubenswrapper[4724]: > Feb 26 11:29:11 crc kubenswrapper[4724]: I0226 11:29:11.953949 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:29:11 crc kubenswrapper[4724]: I0226 11:29:11.964099 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wsr8k" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.215531 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x9682-config-wv48g"] Feb 26 11:29:12 crc kubenswrapper[4724]: E0226 11:29:12.216009 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7412680-68df-4ebb-9961-8a89d8f83176" containerName="swift-ring-rebalance" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.216035 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7412680-68df-4ebb-9961-8a89d8f83176" containerName="swift-ring-rebalance" Feb 26 11:29:12 
crc kubenswrapper[4724]: I0226 11:29:12.216274 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7412680-68df-4ebb-9961-8a89d8f83176" containerName="swift-ring-rebalance" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.216947 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.221732 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.229719 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x9682-config-wv48g"] Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.361398 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-additional-scripts\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.361461 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-log-ovn\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.361521 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run-ovn\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.361559 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.361589 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-scripts\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.361650 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2qpz\" (UniqueName: \"kubernetes.io/projected/241342cc-e734-44bf-bf46-aef3e2c31098-kube-api-access-t2qpz\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.465916 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2qpz\" (UniqueName: \"kubernetes.io/projected/241342cc-e734-44bf-bf46-aef3e2c31098-kube-api-access-t2qpz\") pod \"ovn-controller-x9682-config-wv48g\" (UID: 
\"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.466018 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-additional-scripts\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.466050 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-log-ovn\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.466108 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run-ovn\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.466145 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.466189 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-scripts\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.467588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-log-ovn\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.468610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-additional-scripts\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.468669 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run-ovn\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.468708 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " 
pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.482937 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-scripts\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.506448 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2qpz\" (UniqueName: \"kubernetes.io/projected/241342cc-e734-44bf-bf46-aef3e2c31098-kube-api-access-t2qpz\") pod \"ovn-controller-x9682-config-wv48g\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:12 crc kubenswrapper[4724]: I0226 11:29:12.576305 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.230655 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x9682-config-wv48g"] Feb 26 11:29:13 crc kubenswrapper[4724]: W0226 11:29:13.249553 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod241342cc_e734_44bf_bf46_aef3e2c31098.slice/crio-28e56a8dc1a2f9efaa55ff39303d960dff1c2778666d4c3a54a120c71b7c9bf5 WatchSource:0}: Error finding container 28e56a8dc1a2f9efaa55ff39303d960dff1c2778666d4c3a54a120c71b7c9bf5: Status 404 returned error can't find the container with id 28e56a8dc1a2f9efaa55ff39303d960dff1c2778666d4c3a54a120c71b7c9bf5 Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.881728 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x9682-config-wv48g" event={"ID":"241342cc-e734-44bf-bf46-aef3e2c31098","Type":"ContainerStarted","Data":"7975d99a72a3c4b2d306233ff3eda269b60a6993b690843b8a87460726bc32da"} Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.881979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x9682-config-wv48g" event={"ID":"241342cc-e734-44bf-bf46-aef3e2c31098","Type":"ContainerStarted","Data":"28e56a8dc1a2f9efaa55ff39303d960dff1c2778666d4c3a54a120c71b7c9bf5"} Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.888498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"60b940c6c11b5e91aa905a433bd54cb98a44d75e9c22fcea73013787b0b030ac"} Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.888534 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"46dfa1d56858e8222a7951625f1efb3eb275f16328bfe3704bc0107bd8303b47"} Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.888543 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"f7f4f574d824651cf661d2bfcd6b24e69aa2cc573a9f087c841d78141609ec7c"} Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.888551 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"939548916aea10a28b9c416dffb0d3968e2a96d70903adf39f0c50dcd8cad162"} Feb 26 11:29:13 crc kubenswrapper[4724]: I0226 11:29:13.906211 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x9682-config-wv48g" podStartSLOduration=1.906187313 podStartE2EDuration="1.906187313s" podCreationTimestamp="2026-02-26 11:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:13.90018988 +0000 UTC m=+1420.555928995" watchObservedRunningTime="2026-02-26 11:29:13.906187313 +0000 UTC m=+1420.561926438" Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.826413 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k6n2b"] Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.829477 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.831951 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.834881 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k6n2b"] Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.897821 4724 generic.go:334] "Generic (PLEG): container finished" podID="241342cc-e734-44bf-bf46-aef3e2c31098" containerID="7975d99a72a3c4b2d306233ff3eda269b60a6993b690843b8a87460726bc32da" exitCode=0 Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.897860 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x9682-config-wv48g" event={"ID":"241342cc-e734-44bf-bf46-aef3e2c31098","Type":"ContainerDied","Data":"7975d99a72a3c4b2d306233ff3eda269b60a6993b690843b8a87460726bc32da"} Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.921722 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6dc\" (UniqueName: \"kubernetes.io/projected/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-kube-api-access-9f6dc\") pod \"root-account-create-update-k6n2b\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:14 crc kubenswrapper[4724]: I0226 11:29:14.921804 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-operator-scripts\") pod \"root-account-create-update-k6n2b\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.023268 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6dc\" (UniqueName: \"kubernetes.io/projected/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-kube-api-access-9f6dc\") pod \"root-account-create-update-k6n2b\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.023346 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-operator-scripts\") pod 
\"root-account-create-update-k6n2b\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.024088 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-operator-scripts\") pod \"root-account-create-update-k6n2b\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.051001 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6dc\" (UniqueName: \"kubernetes.io/projected/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-kube-api-access-9f6dc\") pod \"root-account-create-update-k6n2b\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.145126 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.668328 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k6n2b"] Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.933506 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"5d3c895ca5f7b5230d48929d4da6eb63a5907b91f42cf5d2dd06bfd7ee590e42"} Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.933830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"473b16d9655aca2b4056b3c2bb58528f9ee7ca574dbb2fb5bf4272252895bdf9"} Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.940082 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k6n2b" event={"ID":"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d","Type":"ContainerStarted","Data":"d0291347c52910dd8b1fc1d553d72bf2ac4dff608b401b522f6d41ab56af53f2"} Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.940154 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k6n2b" event={"ID":"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d","Type":"ContainerStarted","Data":"f0b48627f16bbcb532fccca22869c1f967a704afd08772f9694103c55fe090f7"} Feb 26 11:29:15 crc kubenswrapper[4724]: I0226 11:29:15.967667 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-k6n2b" podStartSLOduration=1.967625785 podStartE2EDuration="1.967625785s" podCreationTimestamp="2026-02-26 11:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:15.959482407 +0000 UTC m=+1422.615221542" watchObservedRunningTime="2026-02-26 11:29:15.967625785 +0000 UTC m=+1422.623364900" Feb 26 11:29:16 crc kubenswrapper[4724]: I0226 11:29:16.827888 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-x9682" Feb 26 11:29:16 crc kubenswrapper[4724]: I0226 11:29:16.961155 4724 generic.go:334] "Generic (PLEG): container finished" podID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerID="f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2" exitCode=0 Feb 26 11:29:16 crc 
kubenswrapper[4724]: I0226 11:29:16.961211 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d7fdccb-4fd0-4a6e-9241-add667b9a537","Type":"ContainerDied","Data":"f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2"} Feb 26 11:29:16 crc kubenswrapper[4724]: I0226 11:29:16.969871 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"4acf3d85142bfc97c1ca658778a3f5156b17df0d862fcad18ff4fff664538ea3"} Feb 26 11:29:17 crc kubenswrapper[4724]: I0226 11:29:17.988411 4724 generic.go:334] "Generic (PLEG): container finished" podID="fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" containerID="d0291347c52910dd8b1fc1d553d72bf2ac4dff608b401b522f6d41ab56af53f2" exitCode=0 Feb 26 11:29:17 crc kubenswrapper[4724]: I0226 11:29:17.988481 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k6n2b" event={"ID":"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d","Type":"ContainerDied","Data":"d0291347c52910dd8b1fc1d553d72bf2ac4dff608b401b522f6d41ab56af53f2"} Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.532434 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.758905 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.766814 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.884871 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run-ovn\") pod \"241342cc-e734-44bf-bf46-aef3e2c31098\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.884980 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run\") pod \"241342cc-e734-44bf-bf46-aef3e2c31098\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.885075 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2qpz\" (UniqueName: \"kubernetes.io/projected/241342cc-e734-44bf-bf46-aef3e2c31098-kube-api-access-t2qpz\") pod \"241342cc-e734-44bf-bf46-aef3e2c31098\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.885248 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-scripts\") pod \"241342cc-e734-44bf-bf46-aef3e2c31098\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.885294 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f6dc\" (UniqueName: \"kubernetes.io/projected/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-kube-api-access-9f6dc\") pod \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.885362 4724 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-log-ovn\") pod \"241342cc-e734-44bf-bf46-aef3e2c31098\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.885389 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-additional-scripts\") pod \"241342cc-e734-44bf-bf46-aef3e2c31098\" (UID: \"241342cc-e734-44bf-bf46-aef3e2c31098\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.885441 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-operator-scripts\") pod \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\" (UID: \"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d\") " Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.886611 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" (UID: "fc1f2e39-391c-4eb6-9278-080dd6a1ec1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.886678 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "241342cc-e734-44bf-bf46-aef3e2c31098" (UID: "241342cc-e734-44bf-bf46-aef3e2c31098"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.886699 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run" (OuterVolumeSpecName: "var-run") pod "241342cc-e734-44bf-bf46-aef3e2c31098" (UID: "241342cc-e734-44bf-bf46-aef3e2c31098"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.887280 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "241342cc-e734-44bf-bf46-aef3e2c31098" (UID: "241342cc-e734-44bf-bf46-aef3e2c31098"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.887829 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "241342cc-e734-44bf-bf46-aef3e2c31098" (UID: "241342cc-e734-44bf-bf46-aef3e2c31098"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.888127 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-scripts" (OuterVolumeSpecName: "scripts") pod "241342cc-e734-44bf-bf46-aef3e2c31098" (UID: "241342cc-e734-44bf-bf46-aef3e2c31098"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.894472 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-kube-api-access-9f6dc" (OuterVolumeSpecName: "kube-api-access-9f6dc") pod "fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" (UID: "fc1f2e39-391c-4eb6-9278-080dd6a1ec1d"). InnerVolumeSpecName "kube-api-access-9f6dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.894846 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241342cc-e734-44bf-bf46-aef3e2c31098-kube-api-access-t2qpz" (OuterVolumeSpecName: "kube-api-access-t2qpz") pod "241342cc-e734-44bf-bf46-aef3e2c31098" (UID: "241342cc-e734-44bf-bf46-aef3e2c31098"). InnerVolumeSpecName "kube-api-access-t2qpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987515 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987556 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f6dc\" (UniqueName: \"kubernetes.io/projected/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-kube-api-access-9f6dc\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987566 4724 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987575 4724 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/241342cc-e734-44bf-bf46-aef3e2c31098-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987584 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987592 4724 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987603 4724 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/241342cc-e734-44bf-bf46-aef3e2c31098-var-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:23 crc kubenswrapper[4724]: I0226 11:29:23.987612 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2qpz\" (UniqueName: \"kubernetes.io/projected/241342cc-e734-44bf-bf46-aef3e2c31098-kube-api-access-t2qpz\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.022835 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lmgxz"] Feb 26 11:29:24 crc kubenswrapper[4724]: E0226 11:29:24.024665 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241342cc-e734-44bf-bf46-aef3e2c31098" containerName="ovn-config" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.024821 4724 
state_mem.go:107] "Deleted CPUSet assignment" podUID="241342cc-e734-44bf-bf46-aef3e2c31098" containerName="ovn-config" Feb 26 11:29:24 crc kubenswrapper[4724]: E0226 11:29:24.024915 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" containerName="mariadb-account-create-update" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.024982 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" containerName="mariadb-account-create-update" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.025264 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="241342cc-e734-44bf-bf46-aef3e2c31098" containerName="ovn-config" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.025357 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" containerName="mariadb-account-create-update" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.026110 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.040161 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x9682-config-wv48g" event={"ID":"241342cc-e734-44bf-bf46-aef3e2c31098","Type":"ContainerDied","Data":"28e56a8dc1a2f9efaa55ff39303d960dff1c2778666d4c3a54a120c71b7c9bf5"} Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.040219 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e56a8dc1a2f9efaa55ff39303d960dff1c2778666d4c3a54a120c71b7c9bf5" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.040274 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x9682-config-wv48g" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.042770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k6n2b" event={"ID":"fc1f2e39-391c-4eb6-9278-080dd6a1ec1d","Type":"ContainerDied","Data":"f0b48627f16bbcb532fccca22869c1f967a704afd08772f9694103c55fe090f7"} Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.042898 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b48627f16bbcb532fccca22869c1f967a704afd08772f9694103c55fe090f7" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.042797 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k6n2b" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.089813 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lmgxz"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.090692 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cps\" (UniqueName: \"kubernetes.io/projected/b463c40e-2552-4c4a-97b4-4a0aba53b68a-kube-api-access-78cps\") pod \"cinder-db-create-lmgxz\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.090776 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b463c40e-2552-4c4a-97b4-4a0aba53b68a-operator-scripts\") pod \"cinder-db-create-lmgxz\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.192629 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78cps\" (UniqueName: \"kubernetes.io/projected/b463c40e-2552-4c4a-97b4-4a0aba53b68a-kube-api-access-78cps\") pod \"cinder-db-create-lmgxz\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.192709 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b463c40e-2552-4c4a-97b4-4a0aba53b68a-operator-scripts\") pod \"cinder-db-create-lmgxz\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.193912 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b463c40e-2552-4c4a-97b4-4a0aba53b68a-operator-scripts\") pod \"cinder-db-create-lmgxz\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.255384 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78cps\" (UniqueName: \"kubernetes.io/projected/b463c40e-2552-4c4a-97b4-4a0aba53b68a-kube-api-access-78cps\") pod \"cinder-db-create-lmgxz\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.337529 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-4eba-account-create-update-c7l6v"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.338574 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.340573 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.342033 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.350019 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-n599s"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.350981 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.373476 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4eba-account-create-update-c7l6v"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.398009 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-n599s"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.465447 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mhtt4"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.466641 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.481191 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eecc-account-create-update-zjhmj"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.482237 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.494534 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.498821 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cds9d\" (UniqueName: \"kubernetes.io/projected/c8bdb72a-3792-4705-8601-a78cb69b4226-kube-api-access-cds9d\") pod \"heat-4eba-account-create-update-c7l6v\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.498908 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6zd\" (UniqueName: \"kubernetes.io/projected/69948f24-a054-4969-8449-0a85840a5da9-kube-api-access-gm6zd\") pod \"heat-db-create-n599s\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.498965 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8bdb72a-3792-4705-8601-a78cb69b4226-operator-scripts\") pod \"heat-4eba-account-create-update-c7l6v\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.499005 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69948f24-a054-4969-8449-0a85840a5da9-operator-scripts\") pod \"heat-db-create-n599s\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.505107 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eecc-account-create-update-zjhmj"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.526204 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mhtt4"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.601040 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqrf\" (UniqueName: \"kubernetes.io/projected/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-kube-api-access-rmqrf\") 
pod \"cinder-eecc-account-create-update-zjhmj\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.601222 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69948f24-a054-4969-8449-0a85840a5da9-operator-scripts\") pod \"heat-db-create-n599s\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.601412 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cds9d\" (UniqueName: \"kubernetes.io/projected/c8bdb72a-3792-4705-8601-a78cb69b4226-kube-api-access-cds9d\") pod \"heat-4eba-account-create-update-c7l6v\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.602723 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-operator-scripts\") pod \"cinder-eecc-account-create-update-zjhmj\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.602807 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzff\" (UniqueName: \"kubernetes.io/projected/4971957b-b209-42b3-8f60-49fd69abde47-kube-api-access-6pzff\") pod \"barbican-db-create-mhtt4\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.602866 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6zd\" (UniqueName: \"kubernetes.io/projected/69948f24-a054-4969-8449-0a85840a5da9-kube-api-access-gm6zd\") pod \"heat-db-create-n599s\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.602921 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4971957b-b209-42b3-8f60-49fd69abde47-operator-scripts\") pod \"barbican-db-create-mhtt4\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.602982 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8bdb72a-3792-4705-8601-a78cb69b4226-operator-scripts\") pod \"heat-4eba-account-create-update-c7l6v\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.604745 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69948f24-a054-4969-8449-0a85840a5da9-operator-scripts\") pod \"heat-db-create-n599s\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.604839 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c8bdb72a-3792-4705-8601-a78cb69b4226-operator-scripts\") pod \"heat-4eba-account-create-update-c7l6v\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.636777 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6zd\" (UniqueName: \"kubernetes.io/projected/69948f24-a054-4969-8449-0a85840a5da9-kube-api-access-gm6zd\") pod \"heat-db-create-n599s\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.643736 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cds9d\" (UniqueName: \"kubernetes.io/projected/c8bdb72a-3792-4705-8601-a78cb69b4226-kube-api-access-cds9d\") pod \"heat-4eba-account-create-update-c7l6v\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.674284 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.688830 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-n599s" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.704784 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pzff\" (UniqueName: \"kubernetes.io/projected/4971957b-b209-42b3-8f60-49fd69abde47-kube-api-access-6pzff\") pod \"barbican-db-create-mhtt4\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.704849 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4971957b-b209-42b3-8f60-49fd69abde47-operator-scripts\") pod \"barbican-db-create-mhtt4\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.704896 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqrf\" (UniqueName: \"kubernetes.io/projected/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-kube-api-access-rmqrf\") pod \"cinder-eecc-account-create-update-zjhmj\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.704990 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-operator-scripts\") pod \"cinder-eecc-account-create-update-zjhmj\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.705954 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4971957b-b209-42b3-8f60-49fd69abde47-operator-scripts\") pod \"barbican-db-create-mhtt4\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.708811 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-operator-scripts\") pod \"cinder-eecc-account-create-update-zjhmj\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.768946 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-189b-account-create-update-5svgh"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.770685 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.772743 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pzff\" (UniqueName: \"kubernetes.io/projected/4971957b-b209-42b3-8f60-49fd69abde47-kube-api-access-6pzff\") pod \"barbican-db-create-mhtt4\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.784168 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqrf\" (UniqueName: \"kubernetes.io/projected/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-kube-api-access-rmqrf\") pod \"cinder-eecc-account-create-update-zjhmj\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.784257 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.804807 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.819253 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-189b-account-create-update-5svgh"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.836432 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.904632 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5l7x7"] Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.907281 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724d020f-8b7e-454d-a956-d34a9d6bcd6b-operator-scripts\") pod \"barbican-189b-account-create-update-5svgh\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.907373 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48slp\" (UniqueName: \"kubernetes.io/projected/724d020f-8b7e-454d-a956-d34a9d6bcd6b-kube-api-access-48slp\") pod \"barbican-189b-account-create-update-5svgh\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.915219 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:24 crc kubenswrapper[4724]: I0226 11:29:24.919655 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5l7x7"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.008912 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724d020f-8b7e-454d-a956-d34a9d6bcd6b-operator-scripts\") pod \"barbican-189b-account-create-update-5svgh\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.008964 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48slp\" (UniqueName: \"kubernetes.io/projected/724d020f-8b7e-454d-a956-d34a9d6bcd6b-kube-api-access-48slp\") pod \"barbican-189b-account-create-update-5svgh\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.009005 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/445386f8-9d5a-4cae-b0ef-3838172cb946-operator-scripts\") pod \"neutron-db-create-5l7x7\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.009112 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpst\" (UniqueName: \"kubernetes.io/projected/445386f8-9d5a-4cae-b0ef-3838172cb946-kube-api-access-htpst\") pod \"neutron-db-create-5l7x7\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.010207 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724d020f-8b7e-454d-a956-d34a9d6bcd6b-operator-scripts\") pod \"barbican-189b-account-create-update-5svgh\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.012029 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c054-account-create-update-pdzqj"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.013351 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.027487 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.040511 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sp2k2"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.041732 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.054652 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.054901 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.055018 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l4lrz" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.055086 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.068960 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c054-account-create-update-pdzqj"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.091940 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d7fdccb-4fd0-4a6e-9241-add667b9a537","Type":"ContainerStarted","Data":"e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8"} Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.092492 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.104300 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sp2k2"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.104937 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48slp\" (UniqueName: \"kubernetes.io/projected/724d020f-8b7e-454d-a956-d34a9d6bcd6b-kube-api-access-48slp\") pod \"barbican-189b-account-create-update-5svgh\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-config-data\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112378 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/445386f8-9d5a-4cae-b0ef-3838172cb946-operator-scripts\") pod \"neutron-db-create-5l7x7\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112439 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-combined-ca-bundle\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112465 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9v8z\" (UniqueName: \"kubernetes.io/projected/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-kube-api-access-q9v8z\") pod \"neutron-c054-account-create-update-pdzqj\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " 
pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112519 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9th\" (UniqueName: \"kubernetes.io/projected/94ddc7ed-7a58-4859-acc1-f6e9796dff95-kube-api-access-9d9th\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpst\" (UniqueName: \"kubernetes.io/projected/445386f8-9d5a-4cae-b0ef-3838172cb946-kube-api-access-htpst\") pod \"neutron-db-create-5l7x7\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.112586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-operator-scripts\") pod \"neutron-c054-account-create-update-pdzqj\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.113419 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/445386f8-9d5a-4cae-b0ef-3838172cb946-operator-scripts\") pod \"neutron-db-create-5l7x7\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.160501 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpst\" (UniqueName: \"kubernetes.io/projected/445386f8-9d5a-4cae-b0ef-3838172cb946-kube-api-access-htpst\") pod \"neutron-db-create-5l7x7\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.160833 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lmgxz"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.161032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"ebb2d23e248769f92b28b9c25cbf7c645fb8abcb990c4da49c44ed70646d1420"} Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.166144 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.188284 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x9682-config-wv48g"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.217790 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-config-data\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.217890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-combined-ca-bundle\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.217928 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9v8z\" (UniqueName: \"kubernetes.io/projected/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-kube-api-access-q9v8z\") pod \"neutron-c054-account-create-update-pdzqj\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.217981 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d9th\" (UniqueName: \"kubernetes.io/projected/94ddc7ed-7a58-4859-acc1-f6e9796dff95-kube-api-access-9d9th\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.218027 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-operator-scripts\") pod \"neutron-c054-account-create-update-pdzqj\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.231908 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-combined-ca-bundle\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.234367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-operator-scripts\") pod \"neutron-c054-account-create-update-pdzqj\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.237727 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-config-data\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.237807 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x9682-config-wv48g"] Feb 
26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.243744 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371953.611055 podStartE2EDuration="1m23.243720648s" podCreationTimestamp="2026-02-26 11:28:02 +0000 UTC" firstStartedPulling="2026-02-26 11:28:04.622588402 +0000 UTC m=+1351.278327517" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:25.185640464 +0000 UTC m=+1431.841379589" watchObservedRunningTime="2026-02-26 11:29:25.243720648 +0000 UTC m=+1431.899459763" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.251105 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.359225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9v8z\" (UniqueName: \"kubernetes.io/projected/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-kube-api-access-q9v8z\") pod \"neutron-c054-account-create-update-pdzqj\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.359238 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d9th\" (UniqueName: \"kubernetes.io/projected/94ddc7ed-7a58-4859-acc1-f6e9796dff95-kube-api-access-9d9th\") pod \"keystone-db-sync-sp2k2\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.395609 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.657765 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.810155 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-n599s"] Feb 26 11:29:25 crc kubenswrapper[4724]: I0226 11:29:25.855274 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mhtt4"] Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.037449 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="241342cc-e734-44bf-bf46-aef3e2c31098" path="/var/lib/kubelet/pods/241342cc-e734-44bf-bf46-aef3e2c31098/volumes" Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.076780 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4eba-account-create-update-c7l6v"] Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.082529 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eecc-account-create-update-zjhmj"] Feb 26 11:29:26 crc kubenswrapper[4724]: W0226 11:29:26.119128 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07dd5f3_2e99_4c1d_985a_d47b7f889b54.slice/crio-7e57b43e1d918ea7e2e7e829b6d4d95a2f772960a6bee74ad7a1bbb46dcf1c3d WatchSource:0}: Error finding container 7e57b43e1d918ea7e2e7e829b6d4d95a2f772960a6bee74ad7a1bbb46dcf1c3d: Status 404 returned error can't find the container with id 7e57b43e1d918ea7e2e7e829b6d4d95a2f772960a6bee74ad7a1bbb46dcf1c3d Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.139137 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-189b-account-create-update-5svgh"] Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.164540 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5l7x7"] Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.185471 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmgxz" event={"ID":"b463c40e-2552-4c4a-97b4-4a0aba53b68a","Type":"ContainerStarted","Data":"e79dce8d3c67b715cfdb3148bab9a6e27d2568eb40b08b3221eb3421b3c4f4bc"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.185629 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmgxz" event={"ID":"b463c40e-2552-4c4a-97b4-4a0aba53b68a","Type":"ContainerStarted","Data":"e90c01a97e4b8e364891c568239eb4519b76bc2fa80f21a7d4ec03bc4ef7c58a"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.187875 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eecc-account-create-update-zjhmj" event={"ID":"a07dd5f3-2e99-4c1d-985a-d47b7f889b54","Type":"ContainerStarted","Data":"7e57b43e1d918ea7e2e7e829b6d4d95a2f772960a6bee74ad7a1bbb46dcf1c3d"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.192098 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4eba-account-create-update-c7l6v" event={"ID":"c8bdb72a-3792-4705-8601-a78cb69b4226","Type":"ContainerStarted","Data":"897923ed5e6eb902aa523f29b9aaba341f6cacf96515b8a0657ceb4b0e6100b3"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.199348 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-n599s" event={"ID":"69948f24-a054-4969-8449-0a85840a5da9","Type":"ContainerStarted","Data":"c856e0a3a5838e66f7997b32a8cf7648afeba703809df8dcdb24ce9d6be3e926"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.212235 4724 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-create-mhtt4" event={"ID":"4971957b-b209-42b3-8f60-49fd69abde47","Type":"ContainerStarted","Data":"73318f9d9fc3fdf6d6be367310851122ec97246eb08618265724809b93d43ba4"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.220777 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"a5b06e046baf767636312bd4904a1d8e4e16f1fb6a84f40efcbde16b8f57ed18"} Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.331866 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sp2k2"] Feb 26 11:29:26 crc kubenswrapper[4724]: I0226 11:29:26.590692 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c054-account-create-update-pdzqj"] Feb 26 11:29:26 crc kubenswrapper[4724]: W0226 11:29:26.695169 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod724d020f_8b7e_454d_a956_d34a9d6bcd6b.slice/crio-95b9841532095a8d999da4c65f4c3bb738e13f405bb542bdafba2d8b2677dfea WatchSource:0}: Error finding container 95b9841532095a8d999da4c65f4c3bb738e13f405bb542bdafba2d8b2677dfea: Status 404 returned error can't find the container with id 95b9841532095a8d999da4c65f4c3bb738e13f405bb542bdafba2d8b2677dfea Feb 26 11:29:26 crc kubenswrapper[4724]: W0226 11:29:26.696845 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab0dd31d_c5ce_4d29_a9b4_56497a14e09d.slice/crio-1cbc7b6a88eb4404b358d3bece2d47c6a05397232b3db0d1b77084e813d54f36 WatchSource:0}: Error finding container 1cbc7b6a88eb4404b358d3bece2d47c6a05397232b3db0d1b77084e813d54f36: Status 404 returned error can't find the container with id 1cbc7b6a88eb4404b358d3bece2d47c6a05397232b3db0d1b77084e813d54f36 Feb 26 11:29:26 crc kubenswrapper[4724]: W0226 11:29:26.698831 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ddc7ed_7a58_4859_acc1_f6e9796dff95.slice/crio-9b57243d8d156a6299412d0c5da85e210607201bb51a876016bb7a03a8a2e387 WatchSource:0}: Error finding container 9b57243d8d156a6299412d0c5da85e210607201bb51a876016bb7a03a8a2e387: Status 404 returned error can't find the container with id 9b57243d8d156a6299412d0c5da85e210607201bb51a876016bb7a03a8a2e387 Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.233763 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sp2k2" event={"ID":"94ddc7ed-7a58-4859-acc1-f6e9796dff95","Type":"ContainerStarted","Data":"9b57243d8d156a6299412d0c5da85e210607201bb51a876016bb7a03a8a2e387"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.242108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c054-account-create-update-pdzqj" event={"ID":"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d","Type":"ContainerStarted","Data":"d143bfa34abc036a7a83f3ced969efba4f761e89c2d7a62db54c37be67c471da"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.242389 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c054-account-create-update-pdzqj" event={"ID":"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d","Type":"ContainerStarted","Data":"1cbc7b6a88eb4404b358d3bece2d47c6a05397232b3db0d1b77084e813d54f36"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.247067 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-5l7x7" event={"ID":"445386f8-9d5a-4cae-b0ef-3838172cb946","Type":"ContainerStarted","Data":"e37aca4fb7b017d1e3a02d104d29844faf6010d6f25bf5cafcaa974a28db8a92"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.249262 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mhtt4" event={"ID":"4971957b-b209-42b3-8f60-49fd69abde47","Type":"ContainerStarted","Data":"96eed766d4870393f6f54c6af52c022e01e8758dc73bec5501dc654f759c0c56"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.259390 4724 generic.go:334] "Generic (PLEG): container finished" podID="b463c40e-2552-4c4a-97b4-4a0aba53b68a" containerID="e79dce8d3c67b715cfdb3148bab9a6e27d2568eb40b08b3221eb3421b3c4f4bc" exitCode=0 Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.259456 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmgxz" event={"ID":"b463c40e-2552-4c4a-97b4-4a0aba53b68a","Type":"ContainerDied","Data":"e79dce8d3c67b715cfdb3148bab9a6e27d2568eb40b08b3221eb3421b3c4f4bc"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.266281 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-189b-account-create-update-5svgh" event={"ID":"724d020f-8b7e-454d-a956-d34a9d6bcd6b","Type":"ContainerStarted","Data":"95b9841532095a8d999da4c65f4c3bb738e13f405bb542bdafba2d8b2677dfea"} Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.281378 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c054-account-create-update-pdzqj" podStartSLOduration=3.2813533010000002 podStartE2EDuration="3.281353301s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:27.275558593 +0000 UTC m=+1433.931297728" watchObservedRunningTime="2026-02-26 11:29:27.281353301 +0000 UTC m=+1433.937092416" Feb 26 11:29:27 crc kubenswrapper[4724]: I0226 11:29:27.326759 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-mhtt4" podStartSLOduration=3.326738421 podStartE2EDuration="3.326738421s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:27.324613326 +0000 UTC m=+1433.980352441" watchObservedRunningTime="2026-02-26 11:29:27.326738421 +0000 UTC m=+1433.982477536" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.328768 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"b10171fa7c6d90a6ff37e6ef9429eae289e902f2c3881af1402339a2c71eac42"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.329065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d5750fa4-34c3-4c23-b0cc-af9726d3034c","Type":"ContainerStarted","Data":"f4bdb27898bed3951c7ba9c1934045f877ec0cd9e9106504631e7307554e5a7d"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.386991 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eecc-account-create-update-zjhmj" event={"ID":"a07dd5f3-2e99-4c1d-985a-d47b7f889b54","Type":"ContainerStarted","Data":"8122adcd5d7ac1923df354d69a5299acf98a8a65a40b47120df7eb1625d0ad9e"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.397468 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-189b-account-create-update-5svgh" event={"ID":"724d020f-8b7e-454d-a956-d34a9d6bcd6b","Type":"ContainerStarted","Data":"0d4d9a38cfea90a768d263d089365ccd094611cb59711921bf8c684118a170f2"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.400026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4eba-account-create-update-c7l6v" event={"ID":"c8bdb72a-3792-4705-8601-a78cb69b4226","Type":"ContainerStarted","Data":"5283b1bf7f17d10b3c1ef3cf7f7708d7a06576df4acbc7e41a0b4770e2af9392"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.409698 4724 generic.go:334] "Generic (PLEG): container finished" podID="69948f24-a054-4969-8449-0a85840a5da9" containerID="0b5afc72420088c44db251ca4328abc3f87e7aa7d21eeef10810463c036615a9" exitCode=0 Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.409832 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-n599s" event={"ID":"69948f24-a054-4969-8449-0a85840a5da9","Type":"ContainerDied","Data":"0b5afc72420088c44db251ca4328abc3f87e7aa7d21eeef10810463c036615a9"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.416259 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5l7x7" event={"ID":"445386f8-9d5a-4cae-b0ef-3838172cb946","Type":"ContainerStarted","Data":"5eb150436a51f707d5b2b1c9c73b54c2a7d6c68558b1cd03bce9952ef768d1f1"} Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.448767 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=31.441615869 podStartE2EDuration="39.448739159s" podCreationTimestamp="2026-02-26 11:28:49 +0000 UTC" firstStartedPulling="2026-02-26 11:29:06.978943045 +0000 UTC m=+1413.634682160" lastFinishedPulling="2026-02-26 11:29:14.986066335 +0000 UTC m=+1421.641805450" observedRunningTime="2026-02-26 11:29:28.424660434 +0000 UTC m=+1435.080399559" watchObservedRunningTime="2026-02-26 11:29:28.448739159 +0000 UTC m=+1435.104478274" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.475645 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-4eba-account-create-update-c7l6v" podStartSLOduration=4.475622316 podStartE2EDuration="4.475622316s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:28.45504635 +0000 UTC m=+1435.110785495" watchObservedRunningTime="2026-02-26 11:29:28.475622316 +0000 UTC m=+1435.131361441" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.483769 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-5l7x7" podStartSLOduration=4.483750524 podStartE2EDuration="4.483750524s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:28.479772302 +0000 UTC m=+1435.135511427" watchObservedRunningTime="2026-02-26 11:29:28.483750524 +0000 UTC m=+1435.139489639" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.536079 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eecc-account-create-update-zjhmj" podStartSLOduration=4.53605345 podStartE2EDuration="4.53605345s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:28.527254045 +0000 UTC m=+1435.182993160" watchObservedRunningTime="2026-02-26 11:29:28.53605345 +0000 UTC m=+1435.191792565" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.562437 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-189b-account-create-update-5svgh" podStartSLOduration=4.562419424 podStartE2EDuration="4.562419424s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:28.55873625 +0000 UTC m=+1435.214475365" watchObservedRunningTime="2026-02-26 11:29:28.562419424 +0000 UTC m=+1435.218158539" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.945315 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-vctpx"] Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.946841 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.969555 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 26 11:29:28 crc kubenswrapper[4724]: I0226 11:29:28.991051 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-vctpx"] Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.034005 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.034079 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.034126 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.034165 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v2sm\" (UniqueName: \"kubernetes.io/projected/936380ab-8283-489b-a609-f583e11b71eb-kube-api-access-8v2sm\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.034239 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 
11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.034350 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-config\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.110145 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.135918 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b463c40e-2552-4c4a-97b4-4a0aba53b68a-operator-scripts\") pod \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136025 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78cps\" (UniqueName: \"kubernetes.io/projected/b463c40e-2552-4c4a-97b4-4a0aba53b68a-kube-api-access-78cps\") pod \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\" (UID: \"b463c40e-2552-4c4a-97b4-4a0aba53b68a\") " Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136500 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b463c40e-2552-4c4a-97b4-4a0aba53b68a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b463c40e-2552-4c4a-97b4-4a0aba53b68a" (UID: "b463c40e-2552-4c4a-97b4-4a0aba53b68a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136714 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136761 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-config\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136807 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136839 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.136882 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.137028 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v2sm\" (UniqueName: \"kubernetes.io/projected/936380ab-8283-489b-a609-f583e11b71eb-kube-api-access-8v2sm\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.137094 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b463c40e-2552-4c4a-97b4-4a0aba53b68a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.137696 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.138596 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.143134 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.143718 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-config\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.149690 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.158888 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b463c40e-2552-4c4a-97b4-4a0aba53b68a-kube-api-access-78cps" (OuterVolumeSpecName: "kube-api-access-78cps") pod "b463c40e-2552-4c4a-97b4-4a0aba53b68a" (UID: "b463c40e-2552-4c4a-97b4-4a0aba53b68a"). InnerVolumeSpecName "kube-api-access-78cps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.175838 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v2sm\" (UniqueName: \"kubernetes.io/projected/936380ab-8283-489b-a609-f583e11b71eb-kube-api-access-8v2sm\") pod \"dnsmasq-dns-6d5b6d6b67-vctpx\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.239009 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78cps\" (UniqueName: \"kubernetes.io/projected/b463c40e-2552-4c4a-97b4-4a0aba53b68a-kube-api-access-78cps\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.407918 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.430049 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lmgxz" event={"ID":"b463c40e-2552-4c4a-97b4-4a0aba53b68a","Type":"ContainerDied","Data":"e90c01a97e4b8e364891c568239eb4519b76bc2fa80f21a7d4ec03bc4ef7c58a"} Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.430097 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90c01a97e4b8e364891c568239eb4519b76bc2fa80f21a7d4ec03bc4ef7c58a" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.430188 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lmgxz" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.454505 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6v8t4" event={"ID":"f5a58b47-8a63-4ec7-aad6-5b7668e56faa","Type":"ContainerStarted","Data":"2f173bf1c648b98533051df7b5eedc8205255da152e3ac406e3e4d9813f0fb00"} Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.469878 4724 generic.go:334] "Generic (PLEG): container finished" podID="ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" containerID="d143bfa34abc036a7a83f3ced969efba4f761e89c2d7a62db54c37be67c471da" exitCode=0 Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.470123 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c054-account-create-update-pdzqj" event={"ID":"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d","Type":"ContainerDied","Data":"d143bfa34abc036a7a83f3ced969efba4f761e89c2d7a62db54c37be67c471da"} Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.499939 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6v8t4" podStartSLOduration=3.945250627 podStartE2EDuration="22.499920608s" podCreationTimestamp="2026-02-26 11:29:07 +0000 UTC" firstStartedPulling="2026-02-26 11:29:08.200914517 +0000 UTC m=+1414.856653632" lastFinishedPulling="2026-02-26 11:29:26.755584498 +0000 UTC m=+1433.411323613" observedRunningTime="2026-02-26 11:29:29.486348051 +0000 UTC m=+1436.142087176" watchObservedRunningTime="2026-02-26 11:29:29.499920608 +0000 UTC m=+1436.155659723" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.837544 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-n599s" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.855603 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6zd\" (UniqueName: \"kubernetes.io/projected/69948f24-a054-4969-8449-0a85840a5da9-kube-api-access-gm6zd\") pod \"69948f24-a054-4969-8449-0a85840a5da9\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.855700 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69948f24-a054-4969-8449-0a85840a5da9-operator-scripts\") pod \"69948f24-a054-4969-8449-0a85840a5da9\" (UID: \"69948f24-a054-4969-8449-0a85840a5da9\") " Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.856641 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69948f24-a054-4969-8449-0a85840a5da9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69948f24-a054-4969-8449-0a85840a5da9" (UID: "69948f24-a054-4969-8449-0a85840a5da9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.860928 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69948f24-a054-4969-8449-0a85840a5da9-kube-api-access-gm6zd" (OuterVolumeSpecName: "kube-api-access-gm6zd") pod "69948f24-a054-4969-8449-0a85840a5da9" (UID: "69948f24-a054-4969-8449-0a85840a5da9"). InnerVolumeSpecName "kube-api-access-gm6zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.957563 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69948f24-a054-4969-8449-0a85840a5da9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:29 crc kubenswrapper[4724]: I0226 11:29:29.957595 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm6zd\" (UniqueName: \"kubernetes.io/projected/69948f24-a054-4969-8449-0a85840a5da9-kube-api-access-gm6zd\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.062928 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-vctpx"] Feb 26 11:29:30 crc kubenswrapper[4724]: W0226 11:29:30.073321 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod936380ab_8283_489b_a609_f583e11b71eb.slice/crio-43082e2ce23d78d70907078610b58f3df085ba52cd773a1aad5b99cc9ad57877 WatchSource:0}: Error finding container 43082e2ce23d78d70907078610b58f3df085ba52cd773a1aad5b99cc9ad57877: Status 404 returned error can't find the container with id 43082e2ce23d78d70907078610b58f3df085ba52cd773a1aad5b99cc9ad57877 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.485049 4724 generic.go:334] "Generic (PLEG): container finished" podID="a07dd5f3-2e99-4c1d-985a-d47b7f889b54" containerID="8122adcd5d7ac1923df354d69a5299acf98a8a65a40b47120df7eb1625d0ad9e" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.485145 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eecc-account-create-update-zjhmj" event={"ID":"a07dd5f3-2e99-4c1d-985a-d47b7f889b54","Type":"ContainerDied","Data":"8122adcd5d7ac1923df354d69a5299acf98a8a65a40b47120df7eb1625d0ad9e"} Feb 26 11:29:30 
crc kubenswrapper[4724]: I0226 11:29:30.496089 4724 generic.go:334] "Generic (PLEG): container finished" podID="724d020f-8b7e-454d-a956-d34a9d6bcd6b" containerID="0d4d9a38cfea90a768d263d089365ccd094611cb59711921bf8c684118a170f2" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.496383 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-189b-account-create-update-5svgh" event={"ID":"724d020f-8b7e-454d-a956-d34a9d6bcd6b","Type":"ContainerDied","Data":"0d4d9a38cfea90a768d263d089365ccd094611cb59711921bf8c684118a170f2"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.497865 4724 generic.go:334] "Generic (PLEG): container finished" podID="c8bdb72a-3792-4705-8601-a78cb69b4226" containerID="5283b1bf7f17d10b3c1ef3cf7f7708d7a06576df4acbc7e41a0b4770e2af9392" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.497911 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4eba-account-create-update-c7l6v" event={"ID":"c8bdb72a-3792-4705-8601-a78cb69b4226","Type":"ContainerDied","Data":"5283b1bf7f17d10b3c1ef3cf7f7708d7a06576df4acbc7e41a0b4770e2af9392"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.500503 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-n599s" event={"ID":"69948f24-a054-4969-8449-0a85840a5da9","Type":"ContainerDied","Data":"c856e0a3a5838e66f7997b32a8cf7648afeba703809df8dcdb24ce9d6be3e926"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.500538 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c856e0a3a5838e66f7997b32a8cf7648afeba703809df8dcdb24ce9d6be3e926" Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.500592 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-n599s" Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.503343 4724 generic.go:334] "Generic (PLEG): container finished" podID="445386f8-9d5a-4cae-b0ef-3838172cb946" containerID="5eb150436a51f707d5b2b1c9c73b54c2a7d6c68558b1cd03bce9952ef768d1f1" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.503401 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5l7x7" event={"ID":"445386f8-9d5a-4cae-b0ef-3838172cb946","Type":"ContainerDied","Data":"5eb150436a51f707d5b2b1c9c73b54c2a7d6c68558b1cd03bce9952ef768d1f1"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.511438 4724 generic.go:334] "Generic (PLEG): container finished" podID="4971957b-b209-42b3-8f60-49fd69abde47" containerID="96eed766d4870393f6f54c6af52c022e01e8758dc73bec5501dc654f759c0c56" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.511495 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mhtt4" event={"ID":"4971957b-b209-42b3-8f60-49fd69abde47","Type":"ContainerDied","Data":"96eed766d4870393f6f54c6af52c022e01e8758dc73bec5501dc654f759c0c56"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.514279 4724 generic.go:334] "Generic (PLEG): container finished" podID="936380ab-8283-489b-a609-f583e11b71eb" containerID="b4c39caa22b6a2c7994eb3594399cd4490118a64a4337dc3bc63f443016dc109" exitCode=0 Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.515134 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" event={"ID":"936380ab-8283-489b-a609-f583e11b71eb","Type":"ContainerDied","Data":"b4c39caa22b6a2c7994eb3594399cd4490118a64a4337dc3bc63f443016dc109"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.515165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" event={"ID":"936380ab-8283-489b-a609-f583e11b71eb","Type":"ContainerStarted","Data":"43082e2ce23d78d70907078610b58f3df085ba52cd773a1aad5b99cc9ad57877"} Feb 26 11:29:30 crc kubenswrapper[4724]: I0226 11:29:30.916090 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.082824 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-operator-scripts\") pod \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.083076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9v8z\" (UniqueName: \"kubernetes.io/projected/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-kube-api-access-q9v8z\") pod \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\" (UID: \"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d\") " Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.083669 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" (UID: "ab0dd31d-c5ce-4d29-a9b4-56497a14e09d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.098590 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-kube-api-access-q9v8z" (OuterVolumeSpecName: "kube-api-access-q9v8z") pod "ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" (UID: "ab0dd31d-c5ce-4d29-a9b4-56497a14e09d"). InnerVolumeSpecName "kube-api-access-q9v8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.184576 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9v8z\" (UniqueName: \"kubernetes.io/projected/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-kube-api-access-q9v8z\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.184613 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.526254 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c054-account-create-update-pdzqj" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.526281 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c054-account-create-update-pdzqj" event={"ID":"ab0dd31d-c5ce-4d29-a9b4-56497a14e09d","Type":"ContainerDied","Data":"1cbc7b6a88eb4404b358d3bece2d47c6a05397232b3db0d1b77084e813d54f36"} Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.526646 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cbc7b6a88eb4404b358d3bece2d47c6a05397232b3db0d1b77084e813d54f36" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.533234 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" event={"ID":"936380ab-8283-489b-a609-f583e11b71eb","Type":"ContainerStarted","Data":"b78f93f30a002ba683174deab2048b34bddc010ad23122c2deeaa7d467a1c2fa"} Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.533613 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:31 crc kubenswrapper[4724]: I0226 11:29:31.582552 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podStartSLOduration=3.5825272200000002 podStartE2EDuration="3.58252722s" podCreationTimestamp="2026-02-26 11:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:31.574449524 +0000 UTC m=+1438.230188649" watchObservedRunningTime="2026-02-26 11:29:31.58252722 +0000 UTC m=+1438.238266345" Feb 26 11:29:39 crc kubenswrapper[4724]: I0226 11:29:39.409361 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:29:39 crc kubenswrapper[4724]: I0226 11:29:39.481327 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jkf2d"] Feb 26 11:29:39 crc kubenswrapper[4724]: I0226 11:29:39.481589 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="dnsmasq-dns" 
containerID="cri-o://8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571" gracePeriod=10 Feb 26 11:29:39 crc kubenswrapper[4724]: I0226 11:29:39.775049 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.080616 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.089574 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.102568 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.128645 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.131742 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.244979 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-operator-scripts\") pod \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245056 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htpst\" (UniqueName: \"kubernetes.io/projected/445386f8-9d5a-4cae-b0ef-3838172cb946-kube-api-access-htpst\") pod \"445386f8-9d5a-4cae-b0ef-3838172cb946\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245088 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724d020f-8b7e-454d-a956-d34a9d6bcd6b-operator-scripts\") pod \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245463 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8bdb72a-3792-4705-8601-a78cb69b4226-operator-scripts\") pod \"c8bdb72a-3792-4705-8601-a78cb69b4226\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245593 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/445386f8-9d5a-4cae-b0ef-3838172cb946-operator-scripts\") pod \"445386f8-9d5a-4cae-b0ef-3838172cb946\" (UID: \"445386f8-9d5a-4cae-b0ef-3838172cb946\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245659 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4971957b-b209-42b3-8f60-49fd69abde47-operator-scripts\") pod \"4971957b-b209-42b3-8f60-49fd69abde47\" (UID: 
\"4971957b-b209-42b3-8f60-49fd69abde47\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245688 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cds9d\" (UniqueName: \"kubernetes.io/projected/c8bdb72a-3792-4705-8601-a78cb69b4226-kube-api-access-cds9d\") pod \"c8bdb72a-3792-4705-8601-a78cb69b4226\" (UID: \"c8bdb72a-3792-4705-8601-a78cb69b4226\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245715 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a07dd5f3-2e99-4c1d-985a-d47b7f889b54" (UID: "a07dd5f3-2e99-4c1d-985a-d47b7f889b54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245757 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqrf\" (UniqueName: \"kubernetes.io/projected/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-kube-api-access-rmqrf\") pod \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\" (UID: \"a07dd5f3-2e99-4c1d-985a-d47b7f889b54\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245798 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48slp\" (UniqueName: \"kubernetes.io/projected/724d020f-8b7e-454d-a956-d34a9d6bcd6b-kube-api-access-48slp\") pod \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\" (UID: \"724d020f-8b7e-454d-a956-d34a9d6bcd6b\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.245866 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pzff\" (UniqueName: \"kubernetes.io/projected/4971957b-b209-42b3-8f60-49fd69abde47-kube-api-access-6pzff\") pod \"4971957b-b209-42b3-8f60-49fd69abde47\" (UID: \"4971957b-b209-42b3-8f60-49fd69abde47\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.246141 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8bdb72a-3792-4705-8601-a78cb69b4226-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8bdb72a-3792-4705-8601-a78cb69b4226" (UID: "c8bdb72a-3792-4705-8601-a78cb69b4226"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.246525 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.246541 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8bdb72a-3792-4705-8601-a78cb69b4226-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.249546 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4971957b-b209-42b3-8f60-49fd69abde47-kube-api-access-6pzff" (OuterVolumeSpecName: "kube-api-access-6pzff") pod "4971957b-b209-42b3-8f60-49fd69abde47" (UID: "4971957b-b209-42b3-8f60-49fd69abde47"). InnerVolumeSpecName "kube-api-access-6pzff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.250119 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445386f8-9d5a-4cae-b0ef-3838172cb946-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "445386f8-9d5a-4cae-b0ef-3838172cb946" (UID: "445386f8-9d5a-4cae-b0ef-3838172cb946"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.251154 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4971957b-b209-42b3-8f60-49fd69abde47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4971957b-b209-42b3-8f60-49fd69abde47" (UID: "4971957b-b209-42b3-8f60-49fd69abde47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.251822 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445386f8-9d5a-4cae-b0ef-3838172cb946-kube-api-access-htpst" (OuterVolumeSpecName: "kube-api-access-htpst") pod "445386f8-9d5a-4cae-b0ef-3838172cb946" (UID: "445386f8-9d5a-4cae-b0ef-3838172cb946"). InnerVolumeSpecName "kube-api-access-htpst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.254036 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724d020f-8b7e-454d-a956-d34a9d6bcd6b-kube-api-access-48slp" (OuterVolumeSpecName: "kube-api-access-48slp") pod "724d020f-8b7e-454d-a956-d34a9d6bcd6b" (UID: "724d020f-8b7e-454d-a956-d34a9d6bcd6b"). InnerVolumeSpecName "kube-api-access-48slp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.255715 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724d020f-8b7e-454d-a956-d34a9d6bcd6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "724d020f-8b7e-454d-a956-d34a9d6bcd6b" (UID: "724d020f-8b7e-454d-a956-d34a9d6bcd6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.263357 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-kube-api-access-rmqrf" (OuterVolumeSpecName: "kube-api-access-rmqrf") pod "a07dd5f3-2e99-4c1d-985a-d47b7f889b54" (UID: "a07dd5f3-2e99-4c1d-985a-d47b7f889b54"). InnerVolumeSpecName "kube-api-access-rmqrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.263418 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8bdb72a-3792-4705-8601-a78cb69b4226-kube-api-access-cds9d" (OuterVolumeSpecName: "kube-api-access-cds9d") pod "c8bdb72a-3792-4705-8601-a78cb69b4226" (UID: "c8bdb72a-3792-4705-8601-a78cb69b4226"). InnerVolumeSpecName "kube-api-access-cds9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.349738 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqrf\" (UniqueName: \"kubernetes.io/projected/a07dd5f3-2e99-4c1d-985a-d47b7f889b54-kube-api-access-rmqrf\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351082 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48slp\" (UniqueName: \"kubernetes.io/projected/724d020f-8b7e-454d-a956-d34a9d6bcd6b-kube-api-access-48slp\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351096 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pzff\" (UniqueName: \"kubernetes.io/projected/4971957b-b209-42b3-8f60-49fd69abde47-kube-api-access-6pzff\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351114 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htpst\" (UniqueName: \"kubernetes.io/projected/445386f8-9d5a-4cae-b0ef-3838172cb946-kube-api-access-htpst\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351125 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724d020f-8b7e-454d-a956-d34a9d6bcd6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351135 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/445386f8-9d5a-4cae-b0ef-3838172cb946-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351144 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4971957b-b209-42b3-8f60-49fd69abde47-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.351154 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cds9d\" (UniqueName: \"kubernetes.io/projected/c8bdb72a-3792-4705-8601-a78cb69b4226-kube-api-access-cds9d\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.470327 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.613251 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-189b-account-create-update-5svgh" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.613757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-189b-account-create-update-5svgh" event={"ID":"724d020f-8b7e-454d-a956-d34a9d6bcd6b","Type":"ContainerDied","Data":"95b9841532095a8d999da4c65f4c3bb738e13f405bb542bdafba2d8b2677dfea"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.614135 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95b9841532095a8d999da4c65f4c3bb738e13f405bb542bdafba2d8b2677dfea" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.618919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sp2k2" event={"ID":"94ddc7ed-7a58-4859-acc1-f6e9796dff95","Type":"ContainerStarted","Data":"5d64263900c6d4441fb7e16054cea6d07bffe8f76a42f8f6a297bd1bbf9b370d"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.622606 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4eba-account-create-update-c7l6v" event={"ID":"c8bdb72a-3792-4705-8601-a78cb69b4226","Type":"ContainerDied","Data":"897923ed5e6eb902aa523f29b9aaba341f6cacf96515b8a0657ceb4b0e6100b3"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.622794 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="897923ed5e6eb902aa523f29b9aaba341f6cacf96515b8a0657ceb4b0e6100b3" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.622653 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4eba-account-create-update-c7l6v" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.624196 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5l7x7" event={"ID":"445386f8-9d5a-4cae-b0ef-3838172cb946","Type":"ContainerDied","Data":"e37aca4fb7b017d1e3a02d104d29844faf6010d6f25bf5cafcaa974a28db8a92"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.624225 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37aca4fb7b017d1e3a02d104d29844faf6010d6f25bf5cafcaa974a28db8a92" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.624289 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5l7x7" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.625253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mhtt4" event={"ID":"4971957b-b209-42b3-8f60-49fd69abde47","Type":"ContainerDied","Data":"73318f9d9fc3fdf6d6be367310851122ec97246eb08618265724809b93d43ba4"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.625273 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73318f9d9fc3fdf6d6be367310851122ec97246eb08618265724809b93d43ba4" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.625315 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mhtt4" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.642827 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sp2k2" podStartSLOduration=3.136623284 podStartE2EDuration="16.64281044s" podCreationTimestamp="2026-02-26 11:29:24 +0000 UTC" firstStartedPulling="2026-02-26 11:29:26.730302032 +0000 UTC m=+1433.386041157" lastFinishedPulling="2026-02-26 11:29:40.236489198 +0000 UTC m=+1446.892228313" observedRunningTime="2026-02-26 11:29:40.638753386 +0000 UTC m=+1447.294492501" watchObservedRunningTime="2026-02-26 11:29:40.64281044 +0000 UTC m=+1447.298549555" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.648058 4724 generic.go:334] "Generic (PLEG): container finished" podID="39d817a7-9237-4683-88aa-20bbbd487d49" containerID="8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571" exitCode=0 Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.648261 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" event={"ID":"39d817a7-9237-4683-88aa-20bbbd487d49","Type":"ContainerDied","Data":"8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.648305 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" event={"ID":"39d817a7-9237-4683-88aa-20bbbd487d49","Type":"ContainerDied","Data":"67a5cd1c59279449319086ea1c4586d927b84ac1988de79de6c1c0b5e7e62156"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.648322 4724 scope.go:117] "RemoveContainer" containerID="8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.648650 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jkf2d" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.655402 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-nb\") pod \"39d817a7-9237-4683-88aa-20bbbd487d49\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.655529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlttz\" (UniqueName: \"kubernetes.io/projected/39d817a7-9237-4683-88aa-20bbbd487d49-kube-api-access-rlttz\") pod \"39d817a7-9237-4683-88aa-20bbbd487d49\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.655622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-config\") pod \"39d817a7-9237-4683-88aa-20bbbd487d49\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.655646 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-dns-svc\") pod \"39d817a7-9237-4683-88aa-20bbbd487d49\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.655666 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-sb\") pod \"39d817a7-9237-4683-88aa-20bbbd487d49\" (UID: \"39d817a7-9237-4683-88aa-20bbbd487d49\") " Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.662268 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eecc-account-create-update-zjhmj" event={"ID":"a07dd5f3-2e99-4c1d-985a-d47b7f889b54","Type":"ContainerDied","Data":"7e57b43e1d918ea7e2e7e829b6d4d95a2f772960a6bee74ad7a1bbb46dcf1c3d"} Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.662308 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e57b43e1d918ea7e2e7e829b6d4d95a2f772960a6bee74ad7a1bbb46dcf1c3d" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.662828 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eecc-account-create-update-zjhmj" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.690244 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d817a7-9237-4683-88aa-20bbbd487d49-kube-api-access-rlttz" (OuterVolumeSpecName: "kube-api-access-rlttz") pod "39d817a7-9237-4683-88aa-20bbbd487d49" (UID: "39d817a7-9237-4683-88aa-20bbbd487d49"). InnerVolumeSpecName "kube-api-access-rlttz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.693304 4724 scope.go:117] "RemoveContainer" containerID="7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.719705 4724 scope.go:117] "RemoveContainer" containerID="8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571" Feb 26 11:29:40 crc kubenswrapper[4724]: E0226 11:29:40.720121 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571\": container with ID starting with 8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571 not found: ID does not exist" containerID="8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.720170 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571"} err="failed to get container status \"8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571\": rpc error: code = NotFound desc = could not find container \"8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571\": container with ID starting with 8f05001cead92858e49ae1d7c9b130d932f2b0bf5ecbb9b718dafe8b26f89571 not found: ID does not exist" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.720199 4724 scope.go:117] "RemoveContainer" containerID="7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5" Feb 26 11:29:40 crc kubenswrapper[4724]: E0226 11:29:40.720531 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5\": container with ID starting with 7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5 not found: ID does not exist" containerID="7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.720572 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5"} err="failed to get container status \"7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5\": rpc error: code = NotFound desc = could not find container \"7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5\": container with ID starting with 7c562a223b3f141228aeab8387c0596740f7becc4574a79fd4a0a2d9462621b5 not found: ID does not exist" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.730890 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39d817a7-9237-4683-88aa-20bbbd487d49" (UID: "39d817a7-9237-4683-88aa-20bbbd487d49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.731442 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39d817a7-9237-4683-88aa-20bbbd487d49" (UID: "39d817a7-9237-4683-88aa-20bbbd487d49"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.734437 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39d817a7-9237-4683-88aa-20bbbd487d49" (UID: "39d817a7-9237-4683-88aa-20bbbd487d49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.747060 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-config" (OuterVolumeSpecName: "config") pod "39d817a7-9237-4683-88aa-20bbbd487d49" (UID: "39d817a7-9237-4683-88aa-20bbbd487d49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.758627 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.758656 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.758665 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.758675 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39d817a7-9237-4683-88aa-20bbbd487d49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:40 crc kubenswrapper[4724]: I0226 11:29:40.758684 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlttz\" (UniqueName: \"kubernetes.io/projected/39d817a7-9237-4683-88aa-20bbbd487d49-kube-api-access-rlttz\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:41 crc kubenswrapper[4724]: I0226 11:29:41.014781 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jkf2d"] Feb 26 11:29:41 crc kubenswrapper[4724]: I0226 11:29:41.023124 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jkf2d"] Feb 26 11:29:41 crc kubenswrapper[4724]: I0226 11:29:41.986520 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" path="/var/lib/kubelet/pods/39d817a7-9237-4683-88aa-20bbbd487d49/volumes" Feb 26 11:29:43 crc kubenswrapper[4724]: I0226 11:29:43.706295 4724 generic.go:334] "Generic (PLEG): container finished" podID="94ddc7ed-7a58-4859-acc1-f6e9796dff95" containerID="5d64263900c6d4441fb7e16054cea6d07bffe8f76a42f8f6a297bd1bbf9b370d" exitCode=0 Feb 26 11:29:43 crc kubenswrapper[4724]: I0226 11:29:43.706385 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sp2k2" event={"ID":"94ddc7ed-7a58-4859-acc1-f6e9796dff95","Type":"ContainerDied","Data":"5d64263900c6d4441fb7e16054cea6d07bffe8f76a42f8f6a297bd1bbf9b370d"} Feb 26 11:29:43 crc kubenswrapper[4724]: I0226 11:29:43.922014 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" 
Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.073027 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.134878 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d9th\" (UniqueName: \"kubernetes.io/projected/94ddc7ed-7a58-4859-acc1-f6e9796dff95-kube-api-access-9d9th\") pod \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.136377 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-combined-ca-bundle\") pod \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.136431 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-config-data\") pod \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\" (UID: \"94ddc7ed-7a58-4859-acc1-f6e9796dff95\") " Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.161559 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ddc7ed-7a58-4859-acc1-f6e9796dff95-kube-api-access-9d9th" (OuterVolumeSpecName: "kube-api-access-9d9th") pod "94ddc7ed-7a58-4859-acc1-f6e9796dff95" (UID: "94ddc7ed-7a58-4859-acc1-f6e9796dff95"). InnerVolumeSpecName "kube-api-access-9d9th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.178426 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94ddc7ed-7a58-4859-acc1-f6e9796dff95" (UID: "94ddc7ed-7a58-4859-acc1-f6e9796dff95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.196011 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-config-data" (OuterVolumeSpecName: "config-data") pod "94ddc7ed-7a58-4859-acc1-f6e9796dff95" (UID: "94ddc7ed-7a58-4859-acc1-f6e9796dff95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.238409 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d9th\" (UniqueName: \"kubernetes.io/projected/94ddc7ed-7a58-4859-acc1-f6e9796dff95-kube-api-access-9d9th\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.238456 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.238468 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ddc7ed-7a58-4859-acc1-f6e9796dff95-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.795453 4724 generic.go:334] "Generic (PLEG): container finished" podID="f5a58b47-8a63-4ec7-aad6-5b7668e56faa" containerID="2f173bf1c648b98533051df7b5eedc8205255da152e3ac406e3e4d9813f0fb00" exitCode=0 Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.795505 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6v8t4" event={"ID":"f5a58b47-8a63-4ec7-aad6-5b7668e56faa","Type":"ContainerDied","Data":"2f173bf1c648b98533051df7b5eedc8205255da152e3ac406e3e4d9813f0fb00"} Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.799630 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sp2k2" event={"ID":"94ddc7ed-7a58-4859-acc1-f6e9796dff95","Type":"ContainerDied","Data":"9b57243d8d156a6299412d0c5da85e210607201bb51a876016bb7a03a8a2e387"} Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.799681 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b57243d8d156a6299412d0c5da85e210607201bb51a876016bb7a03a8a2e387" Feb 26 11:29:45 crc kubenswrapper[4724]: I0226 11:29:45.799751 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sp2k2" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.075791 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vb9m8"] Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076412 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b463c40e-2552-4c4a-97b4-4a0aba53b68a" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076427 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b463c40e-2552-4c4a-97b4-4a0aba53b68a" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076440 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445386f8-9d5a-4cae-b0ef-3838172cb946" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076446 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="445386f8-9d5a-4cae-b0ef-3838172cb946" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076458 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4971957b-b209-42b3-8f60-49fd69abde47" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076465 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4971957b-b209-42b3-8f60-49fd69abde47" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076478 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="724d020f-8b7e-454d-a956-d34a9d6bcd6b" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076485 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="724d020f-8b7e-454d-a956-d34a9d6bcd6b" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076498 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69948f24-a054-4969-8449-0a85840a5da9" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076516 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="69948f24-a054-4969-8449-0a85840a5da9" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076538 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="init" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076545 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="init" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076556 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="dnsmasq-dns" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076562 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="dnsmasq-dns" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076571 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076577 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076585 4724 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07dd5f3-2e99-4c1d-985a-d47b7f889b54" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076591 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07dd5f3-2e99-4c1d-985a-d47b7f889b54" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076604 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ddc7ed-7a58-4859-acc1-f6e9796dff95" containerName="keystone-db-sync" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076610 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ddc7ed-7a58-4859-acc1-f6e9796dff95" containerName="keystone-db-sync" Feb 26 11:29:46 crc kubenswrapper[4724]: E0226 11:29:46.076620 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bdb72a-3792-4705-8601-a78cb69b4226" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076626 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bdb72a-3792-4705-8601-a78cb69b4226" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076802 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076850 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="69948f24-a054-4969-8449-0a85840a5da9" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076859 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ddc7ed-7a58-4859-acc1-f6e9796dff95" containerName="keystone-db-sync" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076870 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8bdb72a-3792-4705-8601-a78cb69b4226" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076882 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07dd5f3-2e99-4c1d-985a-d47b7f889b54" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076900 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="724d020f-8b7e-454d-a956-d34a9d6bcd6b" containerName="mariadb-account-create-update" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076923 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="445386f8-9d5a-4cae-b0ef-3838172cb946" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076942 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b463c40e-2552-4c4a-97b4-4a0aba53b68a" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076955 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4971957b-b209-42b3-8f60-49fd69abde47" containerName="mariadb-database-create" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.076969 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d817a7-9237-4683-88aa-20bbbd487d49" containerName="dnsmasq-dns" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.077482 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.085594 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l4lrz" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.085623 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.085916 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.086043 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.089172 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.091310 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-zqsvm"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.092847 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.102348 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-zqsvm"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.110095 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vb9m8"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.153811 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-scripts\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.153916 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-combined-ca-bundle\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.153976 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vsbs\" (UniqueName: \"kubernetes.io/projected/a7effd79-5961-474b-b3b3-4a41b89db380-kube-api-access-9vsbs\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.154082 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-credential-keys\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.154110 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-fernet-keys\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 
crc kubenswrapper[4724]: I0226 11:29:46.154150 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-config-data\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.262919 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-credential-keys\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.263033 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-fernet-keys\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.263075 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.263105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.263130 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-config-data\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.263167 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-scripts\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.272914 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwvg\" (UniqueName: \"kubernetes.io/projected/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-kube-api-access-zqwvg\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.273005 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 
11:29:46.273057 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-combined-ca-bundle\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.273192 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vsbs\" (UniqueName: \"kubernetes.io/projected/a7effd79-5961-474b-b3b3-4a41b89db380-kube-api-access-9vsbs\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.273282 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.273323 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-config\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.305238 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-fernet-keys\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.306053 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-credential-keys\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.308991 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-config-data\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.318711 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-scripts\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.319679 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-combined-ca-bundle\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.363783 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vsbs\" (UniqueName: 
\"kubernetes.io/projected/a7effd79-5961-474b-b3b3-4a41b89db380-kube-api-access-9vsbs\") pod \"keystone-bootstrap-vb9m8\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.364469 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jrqgs"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.365815 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.378249 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqwvg\" (UniqueName: \"kubernetes.io/projected/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-kube-api-access-zqwvg\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.378496 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.378675 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.378778 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-config\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.378880 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.379033 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.380168 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.380216 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: 
\"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.380936 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-config\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.380954 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.381554 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.386077 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7489d86c77-spnp8"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.387789 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.393773 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.394026 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.394159 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.402870 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6w9bw" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.403096 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-hcmvc" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.407149 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.407848 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.412628 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jrqgs"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.435263 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7489d86c77-spnp8"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481015 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-combined-ca-bundle\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481065 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5tp\" (UniqueName: \"kubernetes.io/projected/65202f21-3756-4083-b158-9f06dca33deb-kube-api-access-5k5tp\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-scripts\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481117 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-config-data\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481149 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-horizon-secret-key\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481202 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-logs\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481268 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-config-data\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.481308 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfjd\" (UniqueName: \"kubernetes.io/projected/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-kube-api-access-ftfjd\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc 
kubenswrapper[4724]: I0226 11:29:46.488725 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqwvg\" (UniqueName: \"kubernetes.io/projected/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-kube-api-access-zqwvg\") pod \"dnsmasq-dns-6f8c45789f-zqsvm\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") " pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.540719 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.543331 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.655663 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.655968 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659402 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-combined-ca-bundle\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659484 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5tp\" (UniqueName: \"kubernetes.io/projected/65202f21-3756-4083-b158-9f06dca33deb-kube-api-access-5k5tp\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659559 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-scripts\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659634 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-config-data\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659737 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-horizon-secret-key\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659819 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-logs\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.659954 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-config-data\") pod \"heat-db-sync-jrqgs\" (UID: 
\"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.660048 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfjd\" (UniqueName: \"kubernetes.io/projected/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-kube-api-access-ftfjd\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.661470 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-logs\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.661890 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-scripts\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.665200 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-config-data\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.668606 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-horizon-secret-key\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.668753 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-combined-ca-bundle\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.680878 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-config-data\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.708780 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5tp\" (UniqueName: \"kubernetes.io/projected/65202f21-3756-4083-b158-9f06dca33deb-kube-api-access-5k5tp\") pod \"heat-db-sync-jrqgs\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") " pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.733633 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.736016 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jrqgs" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764334 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-config-data\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-scripts\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764459 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9tk\" (UniqueName: \"kubernetes.io/projected/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-kube-api-access-pl9tk\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764559 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-run-httpd\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764587 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764611 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.764638 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-log-httpd\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.776511 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.865791 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-run-httpd\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.865833 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc 
kubenswrapper[4724]: I0226 11:29:46.865850 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.865886 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-log-httpd\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.865926 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-config-data\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.865961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-scripts\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.865984 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl9tk\" (UniqueName: \"kubernetes.io/projected/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-kube-api-access-pl9tk\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.866747 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-run-httpd\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.868615 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-log-httpd\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.878872 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.880921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-scripts\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.881285 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-config-data\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.881641 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.935457 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl9tk\" (UniqueName: \"kubernetes.io/projected/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-kube-api-access-pl9tk\") pod \"ceilometer-0\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.963268 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fllvh"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.964347 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.986802 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.990524 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b6cqc"] Feb 26 11:29:46 crc kubenswrapper[4724]: I0226 11:29:46.993734 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.003025 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zsnlw" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.003401 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.003595 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.015263 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fllvh"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.037381 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.049919 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ppnzv" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.049939 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.070879 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b6cqc"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.087099 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-config-data\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.087519 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7m4n\" (UniqueName: \"kubernetes.io/projected/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-kube-api-access-b7m4n\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 
11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.087592 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-scripts\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.087625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-db-sync-config-data\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.087653 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-combined-ca-bundle\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.087680 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-etc-machine-id\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-combined-ca-bundle\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189336 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgks\" (UniqueName: \"kubernetes.io/projected/ba5fb0ea-707e-4123-8510-b1d1f9976c34-kube-api-access-cmgks\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189385 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-scripts\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189409 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-config\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189433 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-db-sync-config-data\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189456 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-combined-ca-bundle\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189474 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-etc-machine-id\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-config-data\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.189558 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7m4n\" (UniqueName: \"kubernetes.io/projected/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-kube-api-access-b7m4n\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.194668 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b5845cdd9-d7d56"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.195989 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.196127 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-scripts\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.196442 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-etc-machine-id\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.231537 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rkvvl"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.233155 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-config-data\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.239436 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rkvvl" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.256983 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-msvw2" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.257322 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.263614 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-db-sync-config-data\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.275777 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7m4n\" (UniqueName: \"kubernetes.io/projected/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-kube-api-access-b7m4n\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.276314 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-combined-ca-bundle\") pod \"cinder-db-sync-fllvh\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.276979 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-zqsvm"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.304778 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfjd\" (UniqueName: \"kubernetes.io/projected/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-kube-api-access-ftfjd\") pod \"horizon-7489d86c77-spnp8\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.309845 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5845cdd9-d7d56"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317416 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-config\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317550 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-scripts\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317615 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpd6d\" (UniqueName: \"kubernetes.io/projected/93f9ce1f-2294-454b-bda1-d114e3ab9422-kube-api-access-zpd6d\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317702 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f9ce1f-2294-454b-bda1-d114e3ab9422-logs\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317926 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-combined-ca-bundle\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317961 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmgks\" (UniqueName: \"kubernetes.io/projected/ba5fb0ea-707e-4123-8510-b1d1f9976c34-kube-api-access-cmgks\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.317985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-config-data\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.318041 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/93f9ce1f-2294-454b-bda1-d114e3ab9422-horizon-secret-key\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.318259 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fllvh" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.338041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-config\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.345808 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-combined-ca-bundle\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.349951 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bnckl"] Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.351072 4724 util.go:30] "No sandbox for pod can be found. 
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.366509 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rkvvl"]
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.372415 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.376885 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gtfzr"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.377100 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.407935 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmgks\" (UniqueName: \"kubernetes.io/projected/ba5fb0ea-707e-4123-8510-b1d1f9976c34-kube-api-access-cmgks\") pod \"neutron-db-sync-b6cqc\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") " pod="openstack/neutron-db-sync-b6cqc"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.425506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpd6d\" (UniqueName: \"kubernetes.io/projected/93f9ce1f-2294-454b-bda1-d114e3ab9422-kube-api-access-zpd6d\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.425756 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f9ce1f-2294-454b-bda1-d114e3ab9422-logs\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.425926 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-combined-ca-bundle\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.426064 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-db-sync-config-data\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.426168 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-config-data\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.426352 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/93f9ce1f-2294-454b-bda1-d114e3ab9422-horizon-secret-key\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.426493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5v6\" (UniqueName: \"kubernetes.io/projected/dedd4492-c73a-4f47-8243-fea2dd842a4f-kube-api-access-kp5v6\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.426623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-scripts\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.427738 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f9ce1f-2294-454b-bda1-d114e3ab9422-logs\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.427893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-scripts\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.428800 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-config-data\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.438111 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bnckl"]
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.440690 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/93f9ce1f-2294-454b-bda1-d114e3ab9422-horizon-secret-key\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.458849 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7489d86c77-spnp8"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.532826 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpd6d\" (UniqueName: \"kubernetes.io/projected/93f9ce1f-2294-454b-bda1-d114e3ab9422-kube-api-access-zpd6d\") pod \"horizon-5b5845cdd9-d7d56\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.533206 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"]
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.546646 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp5v6\" (UniqueName: \"kubernetes.io/projected/dedd4492-c73a-4f47-8243-fea2dd842a4f-kube-api-access-kp5v6\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.547022 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-scripts\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.564771 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5845cdd9-d7d56"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.566234 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-combined-ca-bundle\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.566264 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54af76-4781-4532-b8fc-5100f18b0579-logs\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.566302 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-config-data\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.566328 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rn9p\" (UniqueName: \"kubernetes.io/projected/3f54af76-4781-4532-b8fc-5100f18b0579-kube-api-access-8rn9p\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.566406 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-combined-ca-bundle\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.566472 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-db-sync-config-data\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.574940 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.587106 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-combined-ca-bundle\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.633213 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-db-sync-config-data\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.634422 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp5v6\" (UniqueName: \"kubernetes.io/projected/dedd4492-c73a-4f47-8243-fea2dd842a4f-kube-api-access-kp5v6\") pod \"barbican-db-sync-rkvvl\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " pod="openstack/barbican-db-sync-rkvvl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.655742 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b6cqc"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.670163 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"]
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680435 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-scripts\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680491 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680533 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-combined-ca-bundle\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680557 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54af76-4781-4532-b8fc-5100f18b0579-logs\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680598 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-config-data\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680626 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680650 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-config\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.680888 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rn9p\" (UniqueName: \"kubernetes.io/projected/3f54af76-4781-4532-b8fc-5100f18b0579-kube-api-access-8rn9p\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.681064 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjq89\" (UniqueName: \"kubernetes.io/projected/a12e3c20-9594-4fff-8f15-47e10d1c3f08-kube-api-access-kjq89\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.681106 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.681321 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.683124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54af76-4781-4532-b8fc-5100f18b0579-logs\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.701574 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-scripts\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.744432 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-combined-ca-bundle\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.744956 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vb9m8"]
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.749751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-config-data\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.779447 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rn9p\" (UniqueName: \"kubernetes.io/projected/3f54af76-4781-4532-b8fc-5100f18b0579-kube-api-access-8rn9p\") pod \"placement-db-sync-bnckl\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " pod="openstack/placement-db-sync-bnckl"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.784024 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.784088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.784126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.784160 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-config\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.784322 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjq89\" (UniqueName: \"kubernetes.io/projected/a12e3c20-9594-4fff-8f15-47e10d1c3f08-kube-api-access-kjq89\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.784356 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.785007 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.785537 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.786097 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-config\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.788687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.795236 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"
\"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.801498 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bnckl" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.806051 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b0d66ab1-513b-452a-9f31-bfc4b4be6c18" containerName="galera" probeResult="failure" output="command timed out" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.816558 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b0d66ab1-513b-452a-9f31-bfc4b4be6c18" containerName="galera" probeResult="failure" output="command timed out" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.817503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjq89\" (UniqueName: \"kubernetes.io/projected/a12e3c20-9594-4fff-8f15-47e10d1c3f08-kube-api-access-kjq89\") pod \"dnsmasq-dns-fcfdd6f9f-dhjzj\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.868100 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rkvvl" Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.889445 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9m8" event={"ID":"a7effd79-5961-474b-b3b3-4a41b89db380","Type":"ContainerStarted","Data":"653e5b9117aeecbe6e54e3d35fc752e082db09719543f0c3fe58501ce0923522"} Feb 26 11:29:47 crc kubenswrapper[4724]: I0226 11:29:47.920064 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.510866 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6v8t4" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.519305 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7489d86c77-spnp8"] Feb 26 11:29:48 crc kubenswrapper[4724]: W0226 11:29:48.519927 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccd3a5e4_52a8_4f94_a4d3_c2099e118b30.slice/crio-0661f982c31bf8e02d2b29095e3b154123bffeb32b9ec424e2a8452e39a0843b WatchSource:0}: Error finding container 0661f982c31bf8e02d2b29095e3b154123bffeb32b9ec424e2a8452e39a0843b: Status 404 returned error can't find the container with id 0661f982c31bf8e02d2b29095e3b154123bffeb32b9ec424e2a8452e39a0843b Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.594695 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-zqsvm"] Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.640341 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.671115 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jrqgs"] Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.689772 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fllvh"] Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.709469 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-combined-ca-bundle\") pod \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.709593 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-db-sync-config-data\") pod \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.709720 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-config-data\") pod \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.709753 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdc84\" (UniqueName: \"kubernetes.io/projected/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-kube-api-access-mdc84\") pod \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\" (UID: \"f5a58b47-8a63-4ec7-aad6-5b7668e56faa\") " Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.718099 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-kube-api-access-mdc84" (OuterVolumeSpecName: "kube-api-access-mdc84") pod "f5a58b47-8a63-4ec7-aad6-5b7668e56faa" (UID: "f5a58b47-8a63-4ec7-aad6-5b7668e56faa"). InnerVolumeSpecName "kube-api-access-mdc84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.718288 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f5a58b47-8a63-4ec7-aad6-5b7668e56faa" (UID: "f5a58b47-8a63-4ec7-aad6-5b7668e56faa"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.750883 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bnckl"] Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.761577 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5a58b47-8a63-4ec7-aad6-5b7668e56faa" (UID: "f5a58b47-8a63-4ec7-aad6-5b7668e56faa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.794434 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-config-data" (OuterVolumeSpecName: "config-data") pod "f5a58b47-8a63-4ec7-aad6-5b7668e56faa" (UID: "f5a58b47-8a63-4ec7-aad6-5b7668e56faa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.811731 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.811773 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.811786 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.811797 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdc84\" (UniqueName: \"kubernetes.io/projected/f5a58b47-8a63-4ec7-aad6-5b7668e56faa-kube-api-access-mdc84\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.910825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerStarted","Data":"5a7f99e884f3917960046e92266601052e01ba600fa9cf01245b4d4d1ffc3e14"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.924370 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9m8" event={"ID":"a7effd79-5961-474b-b3b3-4a41b89db380","Type":"ContainerStarted","Data":"978c9f1dadd2fe427d90affee829023bd6e57a29c4e230020d3e9d63c9331b19"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.929483 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5845cdd9-d7d56"] Feb 26 11:29:48 crc kubenswrapper[4724]: W0226 11:29:48.932291 4724 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93f9ce1f_2294_454b_bda1_d114e3ab9422.slice/crio-10c5a56776c775a82ddb0a626b8f5798d98f42180344d9340138c325943096c9 WatchSource:0}: Error finding container 10c5a56776c775a82ddb0a626b8f5798d98f42180344d9340138c325943096c9: Status 404 returned error can't find the container with id 10c5a56776c775a82ddb0a626b8f5798d98f42180344d9340138c325943096c9 Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.940635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bnckl" event={"ID":"3f54af76-4781-4532-b8fc-5100f18b0579","Type":"ContainerStarted","Data":"1b5b63930d3504b43251344e93566145128262fe1db95c30c38f9f7bdb376646"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.945253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jrqgs" event={"ID":"65202f21-3756-4083-b158-9f06dca33deb","Type":"ContainerStarted","Data":"5e233ac070bc7d12c710cb67e63e229665cec758442e2c5155015ac6972eb021"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.952835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7489d86c77-spnp8" event={"ID":"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30","Type":"ContainerStarted","Data":"0661f982c31bf8e02d2b29095e3b154123bffeb32b9ec424e2a8452e39a0843b"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.954706 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" event={"ID":"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6","Type":"ContainerStarted","Data":"a3e92eaf8595ea5852901e879bae4199731fc384c32efd7b1be5199180ee2c79"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.956893 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rkvvl"] Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.967698 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vb9m8" podStartSLOduration=2.967677367 podStartE2EDuration="2.967677367s" podCreationTimestamp="2026-02-26 11:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:48.965769338 +0000 UTC m=+1455.621508463" watchObservedRunningTime="2026-02-26 11:29:48.967677367 +0000 UTC m=+1455.623416492" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.977171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fllvh" event={"ID":"f6f963de-7cc1-40fa-93ce-5f1facd31ffc","Type":"ContainerStarted","Data":"96e2d0c3ee3903b58d23ef780d3bcfc64a3837d4d08e07dda1c8686c2721e1e9"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.991649 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6v8t4" event={"ID":"f5a58b47-8a63-4ec7-aad6-5b7668e56faa","Type":"ContainerDied","Data":"b93734af6d1de38273f04ce1e7c034a259b1a06069e6e6948ee058734c3267fe"} Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.991699 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93734af6d1de38273f04ce1e7c034a259b1a06069e6e6948ee058734c3267fe" Feb 26 11:29:48 crc kubenswrapper[4724]: I0226 11:29:48.991771 4724 util.go:48] "No ready sandbox for pod can be found. 
Feb 26 11:29:49 crc kubenswrapper[4724]: I0226 11:29:49.063728 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b6cqc"]
Feb 26 11:29:49 crc kubenswrapper[4724]: I0226 11:29:49.073048 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"]
Feb 26 11:29:49 crc kubenswrapper[4724]: W0226 11:29:49.095256 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda12e3c20_9594_4fff_8f15_47e10d1c3f08.slice/crio-04acfc6f6e57ebfb0984f01ed9ee85a5f3dceb6d79302220b9c429563d3039d4 WatchSource:0}: Error finding container 04acfc6f6e57ebfb0984f01ed9ee85a5f3dceb6d79302220b9c429563d3039d4: Status 404 returned error can't find the container with id 04acfc6f6e57ebfb0984f01ed9ee85a5f3dceb6d79302220b9c429563d3039d4
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.059714 4724 generic.go:334] "Generic (PLEG): container finished" podID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerID="7564d9507bf6de308cbc7a9df902bec921f2b88ff4914c260d5f31aaf50fc1f1" exitCode=0
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.073604 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" event={"ID":"a12e3c20-9594-4fff-8f15-47e10d1c3f08","Type":"ContainerDied","Data":"7564d9507bf6de308cbc7a9df902bec921f2b88ff4914c260d5f31aaf50fc1f1"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.073651 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" event={"ID":"a12e3c20-9594-4fff-8f15-47e10d1c3f08","Type":"ContainerStarted","Data":"04acfc6f6e57ebfb0984f01ed9ee85a5f3dceb6d79302220b9c429563d3039d4"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.073665 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"]
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.092912 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-rh42r"]
Feb 26 11:29:50 crc kubenswrapper[4724]: E0226 11:29:50.093886 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a58b47-8a63-4ec7-aad6-5b7668e56faa" containerName="glance-db-sync"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.093909 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a58b47-8a63-4ec7-aad6-5b7668e56faa" containerName="glance-db-sync"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.111828 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a58b47-8a63-4ec7-aad6-5b7668e56faa" containerName="glance-db-sync"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.136340 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.134806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rkvvl" event={"ID":"dedd4492-c73a-4f47-8243-fea2dd842a4f","Type":"ContainerStarted","Data":"c0b67cf2e1d1f2caf3ec17cabd18c74a668de9c1987cc0997e04cd659c32404c"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.162406 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-rh42r"]
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.166249 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b6cqc" event={"ID":"ba5fb0ea-707e-4123-8510-b1d1f9976c34","Type":"ContainerStarted","Data":"6531ce102a318f4e1d9c9d45ec01a52344227633d6e92c79d338f39d229919e8"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.166434 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b6cqc" event={"ID":"ba5fb0ea-707e-4123-8510-b1d1f9976c34","Type":"ContainerStarted","Data":"6c9349dd70f1f0f3e6cd43f9385ce0f7c9b250d38f62de9c7299531abe71a18a"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.171917 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.177989 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5845cdd9-d7d56" event={"ID":"93f9ce1f-2294-454b-bda1-d114e3ab9422","Type":"ContainerStarted","Data":"10c5a56776c775a82ddb0a626b8f5798d98f42180344d9340138c325943096c9"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.184332 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.184426 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-config\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.184486 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzqn\" (UniqueName: \"kubernetes.io/projected/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-kube-api-access-qbzqn\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.184612 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.184650 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.179757 4724 generic.go:334] "Generic (PLEG): container finished" podID="91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" containerID="1c1b79c5ec07d0556ff7057c65a51e22e6ca904eacbbc145d4bbdc6707dc6c62" exitCode=0
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.186319 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" event={"ID":"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6","Type":"ContainerDied","Data":"1c1b79c5ec07d0556ff7057c65a51e22e6ca904eacbbc145d4bbdc6707dc6c62"}
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.258962 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b6cqc" podStartSLOduration=4.258942871 podStartE2EDuration="4.258942871s" podCreationTimestamp="2026-02-26 11:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:50.233866 +0000 UTC m=+1456.889605125" watchObservedRunningTime="2026-02-26 11:29:50.258942871 +0000 UTC m=+1456.914681986"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.302897 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.303009 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.303070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-config\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.303119 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbzqn\" (UniqueName: \"kubernetes.io/projected/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-kube-api-access-qbzqn\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.303212 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.303241 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.306453 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.306955 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.307204 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-config\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.307518 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.307834 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.356986 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbzqn\" (UniqueName: \"kubernetes.io/projected/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-kube-api-access-qbzqn\") pod \"dnsmasq-dns-57c957c4ff-rh42r\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.392893 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.812803 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5845cdd9-d7d56"]
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.885770 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.891060 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.899579 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.899611 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4k8sf"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.899893 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.913524 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm"
Feb 26 11:29:50 crc kubenswrapper[4724]: I0226 11:29:50.975505 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.006767 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cf69d49-66r7l"]
Feb 26 11:29:51 crc kubenswrapper[4724]: E0226 11:29:51.007341 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" containerName="init"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.007357 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" containerName="init"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.007602 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" containerName="init"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.009188 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf69d49-66r7l"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.033510 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-swift-storage-0\") pod \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") "
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.033563 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwvg\" (UniqueName: \"kubernetes.io/projected/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-kube-api-access-zqwvg\") pod \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") "
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.033607 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-sb\") pod \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") "
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.033698 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-nb\") pod \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") "
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.033748 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-svc\") pod \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") "
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.033839 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-config\") pod \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\" (UID: \"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6\") "
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034201 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-logs\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034235 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034320 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034361 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034426 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034503 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztzg\" (UniqueName: \"kubernetes.io/projected/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-kube-api-access-gztzg\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.034593 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.054535 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-kube-api-access-zqwvg" (OuterVolumeSpecName: "kube-api-access-zqwvg") pod "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" (UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "kube-api-access-zqwvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
(UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "kube-api-access-zqwvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.062954 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf69d49-66r7l"] Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.124711 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" (UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.124725 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-config" (OuterVolumeSpecName: "config") pod "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" (UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.136623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137046 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137082 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137239 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98bt\" (UniqueName: \"kubernetes.io/projected/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-kube-api-access-m98bt\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137411 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gztzg\" (UniqueName: \"kubernetes.io/projected/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-kube-api-access-gztzg\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137503 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-logs\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137538 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137554 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-config-data\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137580 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-scripts\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137622 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-logs\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137637 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-horizon-secret-key\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137655 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137711 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqwvg\" (UniqueName: \"kubernetes.io/projected/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-kube-api-access-zqwvg\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137721 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.137730 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.138278 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.142451 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-logs\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.145469 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.165724 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.168351 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.169057 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.170985 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gztzg\" (UniqueName: \"kubernetes.io/projected/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-kube-api-access-gztzg\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.193055 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.206800 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.210590 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.213878 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.240150 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-scripts\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.240410 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.240550 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-horizon-secret-key\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.240677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-logs\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.240786 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-config-data\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.240878 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-scripts\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.241023 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6n2\" (UniqueName: \"kubernetes.io/projected/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-kube-api-access-7b6n2\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.241136 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m98bt\" (UniqueName: \"kubernetes.io/projected/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-kube-api-access-m98bt\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.241290 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.241387 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.241496 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-logs\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.241622 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-config-data\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.243040 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-config-data\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.244366 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-scripts\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.244806 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.245788 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-logs\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.265190 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98bt\" (UniqueName: \"kubernetes.io/projected/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-kube-api-access-m98bt\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.290751 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-horizon-secret-key\") pod \"horizon-5cf69d49-66r7l\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.297969 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" (UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.302232 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.304859 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-zqsvm" event={"ID":"91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6","Type":"ContainerDied","Data":"a3e92eaf8595ea5852901e879bae4199731fc384c32efd7b1be5199180ee2c79"} Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.304915 4724 scope.go:117] "RemoveContainer" containerID="1c1b79c5ec07d0556ff7057c65a51e22e6ca904eacbbc145d4bbdc6707dc6c62" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.331003 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" (UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.344795 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.344859 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-logs\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.344885 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-config-data\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.344904 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-scripts\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.344978 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6n2\" (UniqueName: \"kubernetes.io/projected/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-kube-api-access-7b6n2\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.345028 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.345049 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.345414 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.346072 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-logs\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.346308 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.346963 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.346990 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.347744 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" (UID: "91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.357361 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.362776 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-scripts\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.370668 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-config-data\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.397955 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.425788 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.452864 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.489846 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.523905 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6n2\" (UniqueName: \"kubernetes.io/projected/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-kube-api-access-7b6n2\") pod \"glance-default-external-api-0\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.534296 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.534408 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.722617 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-zqsvm"] Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.738650 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-zqsvm"] Feb 26 11:29:51 crc kubenswrapper[4724]: I0226 11:29:51.748281 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-rh42r"] Feb 26 11:29:51 crc kubenswrapper[4724]: W0226 11:29:51.867255 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae2d0ec9_77c9_4a19_b783_b40613d55eb5.slice/crio-66487d535353f5e1c1b5be705ee6dff0006bcdc9f5da25c4eafb5e394803bbaf WatchSource:0}: Error finding container 66487d535353f5e1c1b5be705ee6dff0006bcdc9f5da25c4eafb5e394803bbaf: Status 404 returned error can't find the container with id 66487d535353f5e1c1b5be705ee6dff0006bcdc9f5da25c4eafb5e394803bbaf Feb 26 11:29:52 crc kubenswrapper[4724]: I0226 11:29:52.061513 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6" path="/var/lib/kubelet/pods/91ce0c8e-626b-4b7d-bbe0-0e0e99ac62b6/volumes" Feb 26 11:29:52 crc kubenswrapper[4724]: I0226 11:29:52.286070 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf69d49-66r7l"] Feb 26 11:29:52 crc kubenswrapper[4724]: I0226 11:29:52.334901 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" event={"ID":"ae2d0ec9-77c9-4a19-b783-b40613d55eb5","Type":"ContainerStarted","Data":"66487d535353f5e1c1b5be705ee6dff0006bcdc9f5da25c4eafb5e394803bbaf"} Feb 26 11:29:52 crc kubenswrapper[4724]: W0226 11:29:52.767379 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15eaf092_bdf8_4c23_91a5_3a3d8011b77e.slice/crio-4bb54e3cc98c08e0aa512ae1b93672fddcc7563ae74029ea20d5f919fe520487 WatchSource:0}: Error finding container 4bb54e3cc98c08e0aa512ae1b93672fddcc7563ae74029ea20d5f919fe520487: Status 404 returned error can't find the container with id 4bb54e3cc98c08e0aa512ae1b93672fddcc7563ae74029ea20d5f919fe520487 Feb 26 11:29:52 crc kubenswrapper[4724]: I0226 11:29:52.783690 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:29:52 crc kubenswrapper[4724]: I0226 11:29:52.907097 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:29:52 crc kubenswrapper[4724]: W0226 11:29:52.907387 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d129ee_f8b1_4cad_94c2_9f1cdb871c80.slice/crio-5d9b6b2450fb846c43cce7e83bcfab5fc7a952bbfa2a9c330c798319c4101c88 WatchSource:0}: Error finding container 5d9b6b2450fb846c43cce7e83bcfab5fc7a952bbfa2a9c330c798319c4101c88: Status 404 returned error can't find the container with id 5d9b6b2450fb846c43cce7e83bcfab5fc7a952bbfa2a9c330c798319c4101c88 Feb 26 11:29:53 crc kubenswrapper[4724]: I0226 11:29:53.399351 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf69d49-66r7l" event={"ID":"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e","Type":"ContainerStarted","Data":"d4c2715d538522412287cb9e7735f5c0102322b239c13a597b87c598635ee75a"} Feb 26 
11:29:53 crc kubenswrapper[4724]: I0226 11:29:53.407628 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" event={"ID":"a12e3c20-9594-4fff-8f15-47e10d1c3f08","Type":"ContainerStarted","Data":"acf0cb39174f509c83a364d34c1fa4d0eede352e59f1dec11b4107aa6a6adf20"} Feb 26 11:29:53 crc kubenswrapper[4724]: I0226 11:29:53.411486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15eaf092-bdf8-4c23-91a5-3a3d8011b77e","Type":"ContainerStarted","Data":"4bb54e3cc98c08e0aa512ae1b93672fddcc7563ae74029ea20d5f919fe520487"} Feb 26 11:29:53 crc kubenswrapper[4724]: I0226 11:29:53.416746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97d129ee-f8b1-4cad-94c2-9f1cdb871c80","Type":"ContainerStarted","Data":"5d9b6b2450fb846c43cce7e83bcfab5fc7a952bbfa2a9c330c798319c4101c88"} Feb 26 11:29:54 crc kubenswrapper[4724]: I0226 11:29:54.431309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" event={"ID":"ae2d0ec9-77c9-4a19-b783-b40613d55eb5","Type":"ContainerStarted","Data":"d4bb3f0f7c3a2e2ce156e70d10b27ee7d942386ec78a8a7269fd471be82efdce"} Feb 26 11:29:55 crc kubenswrapper[4724]: I0226 11:29:55.438114 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" containerID="cri-o://acf0cb39174f509c83a364d34c1fa4d0eede352e59f1dec11b4107aa6a6adf20" gracePeriod=10 Feb 26 11:29:55 crc kubenswrapper[4724]: I0226 11:29:55.438217 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" Feb 26 11:29:55 crc kubenswrapper[4724]: I0226 11:29:55.460500 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podStartSLOduration=8.460482286 podStartE2EDuration="8.460482286s" podCreationTimestamp="2026-02-26 11:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:55.459362718 +0000 UTC m=+1462.115101843" watchObservedRunningTime="2026-02-26 11:29:55.460482286 +0000 UTC m=+1462.116221401" Feb 26 11:29:56 crc kubenswrapper[4724]: I0226 11:29:56.463566 4724 generic.go:334] "Generic (PLEG): container finished" podID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerID="acf0cb39174f509c83a364d34c1fa4d0eede352e59f1dec11b4107aa6a6adf20" exitCode=0 Feb 26 11:29:56 crc kubenswrapper[4724]: I0226 11:29:56.463660 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" event={"ID":"a12e3c20-9594-4fff-8f15-47e10d1c3f08","Type":"ContainerDied","Data":"acf0cb39174f509c83a364d34c1fa4d0eede352e59f1dec11b4107aa6a6adf20"} Feb 26 11:29:56 crc kubenswrapper[4724]: I0226 11:29:56.473329 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15eaf092-bdf8-4c23-91a5-3a3d8011b77e","Type":"ContainerStarted","Data":"7f9d369554335df684e76b6310803c36b9599308d7257cbd5d7dfaec3e0d6cea"} Feb 26 11:29:56 crc kubenswrapper[4724]: I0226 11:29:56.475401 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"97d129ee-f8b1-4cad-94c2-9f1cdb871c80","Type":"ContainerStarted","Data":"8e128566c85aa0947463faed4e79eb920acd9e1f42fa142f14ef157ca5c05c6b"} Feb 26 11:29:56 crc kubenswrapper[4724]: I0226 11:29:56.477426 4724 generic.go:334] "Generic (PLEG): container finished" podID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerID="d4bb3f0f7c3a2e2ce156e70d10b27ee7d942386ec78a8a7269fd471be82efdce" exitCode=0 Feb 26 11:29:56 crc kubenswrapper[4724]: I0226 11:29:56.477453 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" event={"ID":"ae2d0ec9-77c9-4a19-b783-b40613d55eb5","Type":"ContainerDied","Data":"d4bb3f0f7c3a2e2ce156e70d10b27ee7d942386ec78a8a7269fd471be82efdce"} Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.494574 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97d129ee-f8b1-4cad-94c2-9f1cdb871c80","Type":"ContainerStarted","Data":"7f22c480fe4b6d6f2ac450dd40d1b5ae07cb55cceb9584c8875fe2f7e0908678"} Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.508594 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" event={"ID":"ae2d0ec9-77c9-4a19-b783-b40613d55eb5","Type":"ContainerStarted","Data":"ba1a001785853808c0463aa52a382e160b06f195a0731fa7366a9be330f43189"} Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.508930 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.530396 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" podStartSLOduration=8.530374934 podStartE2EDuration="8.530374934s" podCreationTimestamp="2026-02-26 11:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:57.52554157 +0000 UTC m=+1464.181280695" watchObservedRunningTime="2026-02-26 11:29:57.530374934 +0000 UTC m=+1464.186114059" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.640459 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7489d86c77-spnp8"] Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.720363 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ddfb9fd96-hzc8c"] Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.721884 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.726600 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.736606 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ddfb9fd96-hzc8c"] Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.768224 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-combined-ca-bundle\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.768318 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-secret-key\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.768367 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-tls-certs\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.768404 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-scripts\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.768475 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dbf\" (UniqueName: \"kubernetes.io/projected/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-kube-api-access-87dbf\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.768591 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-config-data\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.769358 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-logs\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.803941 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.837546 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cf69d49-66r7l"] Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879441 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-secret-key\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879505 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-tls-certs\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879552 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-scripts\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879598 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dbf\" (UniqueName: \"kubernetes.io/projected/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-kube-api-access-87dbf\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879663 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-config-data\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879710 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-logs\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.879772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-combined-ca-bundle\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.880833 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-scripts\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.882396 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-logs\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.883021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-config-data\") pod \"horizon-ddfb9fd96-hzc8c\" 
(UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.901106 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-tls-certs\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.901157 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-combined-ca-bundle\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.909782 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-secret-key\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.929238 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57977849d4-8s5ds"] Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.930839 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.931803 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.956888 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dbf\" (UniqueName: \"kubernetes.io/projected/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-kube-api-access-87dbf\") pod \"horizon-ddfb9fd96-hzc8c\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") " pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:57 crc kubenswrapper[4724]: I0226 11:29:57.976984 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57977849d4-8s5ds"] Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.011035 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.060463 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.082730 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-horizon-secret-key\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.082812 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-horizon-tls-certs\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.082833 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c4b3ae-030b-4e33-9779-2ffa39196a76-logs\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.082874 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvfdm\" (UniqueName: \"kubernetes.io/projected/e4c4b3ae-030b-4e33-9779-2ffa39196a76-kube-api-access-rvfdm\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.082897 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-combined-ca-bundle\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.082964 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4c4b3ae-030b-4e33-9779-2ffa39196a76-scripts\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.083044 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4c4b3ae-030b-4e33-9779-2ffa39196a76-config-data\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.184564 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-horizon-secret-key\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.184645 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-horizon-tls-certs\") pod \"horizon-57977849d4-8s5ds\" (UID: 
\"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.184674 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c4b3ae-030b-4e33-9779-2ffa39196a76-logs\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.184710 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvfdm\" (UniqueName: \"kubernetes.io/projected/e4c4b3ae-030b-4e33-9779-2ffa39196a76-kube-api-access-rvfdm\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.184742 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-combined-ca-bundle\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.184814 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4c4b3ae-030b-4e33-9779-2ffa39196a76-scripts\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.185022 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4c4b3ae-030b-4e33-9779-2ffa39196a76-config-data\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.187283 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4c4b3ae-030b-4e33-9779-2ffa39196a76-config-data\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.187743 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c4b3ae-030b-4e33-9779-2ffa39196a76-logs\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.188581 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4c4b3ae-030b-4e33-9779-2ffa39196a76-scripts\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.190159 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-horizon-tls-certs\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.200014 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-combined-ca-bundle\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.213797 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvfdm\" (UniqueName: \"kubernetes.io/projected/e4c4b3ae-030b-4e33-9779-2ffa39196a76-kube-api-access-rvfdm\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.219849 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e4c4b3ae-030b-4e33-9779-2ffa39196a76-horizon-secret-key\") pod \"horizon-57977849d4-8s5ds\" (UID: \"e4c4b3ae-030b-4e33-9779-2ffa39196a76\") " pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.365972 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:29:58 crc kubenswrapper[4724]: I0226 11:29:58.552083 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.552066809 podStartE2EDuration="8.552066809s" podCreationTimestamp="2026-02-26 11:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:58.546395224 +0000 UTC m=+1465.202134339" watchObservedRunningTime="2026-02-26 11:29:58.552066809 +0000 UTC m=+1465.207805924" Feb 26 11:29:59 crc kubenswrapper[4724]: I0226 11:29:59.528469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15eaf092-bdf8-4c23-91a5-3a3d8011b77e","Type":"ContainerStarted","Data":"f047e98df3577cc95e8c255f4a260177dd51762804310d3a07d08bf1f1df26ed"} Feb 26 11:29:59 crc kubenswrapper[4724]: I0226 11:29:59.528575 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-log" containerID="cri-o://7f9d369554335df684e76b6310803c36b9599308d7257cbd5d7dfaec3e0d6cea" gracePeriod=30 Feb 26 11:29:59 crc kubenswrapper[4724]: I0226 11:29:59.528727 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-httpd" containerID="cri-o://f047e98df3577cc95e8c255f4a260177dd51762804310d3a07d08bf1f1df26ed" gracePeriod=30 Feb 26 11:29:59 crc kubenswrapper[4724]: I0226 11:29:59.528783 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-log" containerID="cri-o://8e128566c85aa0947463faed4e79eb920acd9e1f42fa142f14ef157ca5c05c6b" gracePeriod=30 Feb 26 11:29:59 crc kubenswrapper[4724]: I0226 11:29:59.529022 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-httpd" containerID="cri-o://7f22c480fe4b6d6f2ac450dd40d1b5ae07cb55cceb9584c8875fe2f7e0908678" gracePeriod=30 Feb 26 11:29:59 crc 
kubenswrapper[4724]: I0226 11:29:59.557374 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.557351875 podStartE2EDuration="8.557351875s" podCreationTimestamp="2026-02-26 11:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:29:59.555011495 +0000 UTC m=+1466.210750630" watchObservedRunningTime="2026-02-26 11:29:59.557351875 +0000 UTC m=+1466.213090990"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.144071 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"]
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.145744 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.148841 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.164770 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.170385 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535090-lwblq"]
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.178711 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-lwblq"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.192126 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.205119 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.205381 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.240743 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-lwblq"]
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.242720 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-config-volume\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.242836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmbg\" (UniqueName: \"kubernetes.io/projected/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-kube-api-access-rzmbg\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.242872 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-secret-volume\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.276399 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"]
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.345116 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzmbg\" (UniqueName: \"kubernetes.io/projected/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-kube-api-access-rzmbg\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.345194 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-secret-volume\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.345303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtf7\" (UniqueName: \"kubernetes.io/projected/2e9c2690-0081-4d25-9813-e94f387c218d-kube-api-access-5vtf7\") pod \"auto-csr-approver-29535090-lwblq\" (UID: \"2e9c2690-0081-4d25-9813-e94f387c218d\") " pod="openshift-infra/auto-csr-approver-29535090-lwblq"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.345341 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-config-volume\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.346927 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-config-volume\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.354021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-secret-volume\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.378432 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzmbg\" (UniqueName: \"kubernetes.io/projected/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-kube-api-access-rzmbg\") pod \"collect-profiles-29535090-229qd\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.446695 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtf7\" (UniqueName: \"kubernetes.io/projected/2e9c2690-0081-4d25-9813-e94f387c218d-kube-api-access-5vtf7\") pod \"auto-csr-approver-29535090-lwblq\" (UID: \"2e9c2690-0081-4d25-9813-e94f387c218d\") " pod="openshift-infra/auto-csr-approver-29535090-lwblq"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.469999 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtf7\" (UniqueName: \"kubernetes.io/projected/2e9c2690-0081-4d25-9813-e94f387c218d-kube-api-access-5vtf7\") pod \"auto-csr-approver-29535090-lwblq\" (UID: \"2e9c2690-0081-4d25-9813-e94f387c218d\") " pod="openshift-infra/auto-csr-approver-29535090-lwblq"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.477046 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.522840 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-lwblq"
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.554588 4724 generic.go:334] "Generic (PLEG): container finished" podID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerID="7f9d369554335df684e76b6310803c36b9599308d7257cbd5d7dfaec3e0d6cea" exitCode=143
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.554670 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15eaf092-bdf8-4c23-91a5-3a3d8011b77e","Type":"ContainerDied","Data":"7f9d369554335df684e76b6310803c36b9599308d7257cbd5d7dfaec3e0d6cea"}
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.562502 4724 generic.go:334] "Generic (PLEG): container finished" podID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerID="7f22c480fe4b6d6f2ac450dd40d1b5ae07cb55cceb9584c8875fe2f7e0908678" exitCode=0
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.562547 4724 generic.go:334] "Generic (PLEG): container finished" podID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerID="8e128566c85aa0947463faed4e79eb920acd9e1f42fa142f14ef157ca5c05c6b" exitCode=143
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.562570 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97d129ee-f8b1-4cad-94c2-9f1cdb871c80","Type":"ContainerDied","Data":"7f22c480fe4b6d6f2ac450dd40d1b5ae07cb55cceb9584c8875fe2f7e0908678"}
Feb 26 11:30:00 crc kubenswrapper[4724]: I0226 11:30:00.562602 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97d129ee-f8b1-4cad-94c2-9f1cdb871c80","Type":"ContainerDied","Data":"8e128566c85aa0947463faed4e79eb920acd9e1f42fa142f14ef157ca5c05c6b"}
Feb 26 11:30:01 crc kubenswrapper[4724]: I0226 11:30:01.574231 4724 generic.go:334] "Generic (PLEG): container finished" podID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerID="f047e98df3577cc95e8c255f4a260177dd51762804310d3a07d08bf1f1df26ed" exitCode=0
Feb 26 11:30:01 crc kubenswrapper[4724]: I0226 11:30:01.574293 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15eaf092-bdf8-4c23-91a5-3a3d8011b77e","Type":"ContainerDied","Data":"f047e98df3577cc95e8c255f4a260177dd51762804310d3a07d08bf1f1df26ed"}
Feb 26 11:30:05 crc kubenswrapper[4724]: I0226 11:30:05.404413 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r"
Feb 26 11:30:05 crc kubenswrapper[4724]: I0226 11:30:05.486396 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-vctpx"] Feb 26 11:30:05 crc kubenswrapper[4724]: I0226 11:30:05.486669 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" containerID="cri-o://b78f93f30a002ba683174deab2048b34bddc010ad23122c2deeaa7d467a1c2fa" gracePeriod=10 Feb 26 11:30:06 crc kubenswrapper[4724]: I0226 11:30:06.621190 4724 generic.go:334] "Generic (PLEG): container finished" podID="936380ab-8283-489b-a609-f583e11b71eb" containerID="b78f93f30a002ba683174deab2048b34bddc010ad23122c2deeaa7d467a1c2fa" exitCode=0 Feb 26 11:30:06 crc kubenswrapper[4724]: I0226 11:30:06.621553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" event={"ID":"936380ab-8283-489b-a609-f583e11b71eb","Type":"ContainerDied","Data":"b78f93f30a002ba683174deab2048b34bddc010ad23122c2deeaa7d467a1c2fa"} Feb 26 11:30:07 crc kubenswrapper[4724]: I0226 11:30:07.920605 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:09 crc kubenswrapper[4724]: I0226 11:30:09.409530 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:12 crc kubenswrapper[4724]: I0226 11:30:12.679985 4724 generic.go:334] "Generic (PLEG): container finished" podID="a7effd79-5961-474b-b3b3-4a41b89db380" containerID="978c9f1dadd2fe427d90affee829023bd6e57a29c4e230020d3e9d63c9331b19" exitCode=0 Feb 26 11:30:12 crc kubenswrapper[4724]: I0226 11:30:12.680139 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9m8" event={"ID":"a7effd79-5961-474b-b3b3-4a41b89db380","Type":"ContainerDied","Data":"978c9f1dadd2fe427d90affee829023bd6e57a29c4e230020d3e9d63c9331b19"} Feb 26 11:30:12 crc kubenswrapper[4724]: I0226 11:30:12.921843 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:14 crc kubenswrapper[4724]: I0226 11:30:14.408980 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:17 crc kubenswrapper[4724]: I0226 11:30:17.922997 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:19 crc kubenswrapper[4724]: I0226 11:30:19.408845 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection 
refused" Feb 26 11:30:19 crc kubenswrapper[4724]: I0226 11:30:19.409254 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:30:21 crc kubenswrapper[4724]: I0226 11:30:21.535219 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:30:21 crc kubenswrapper[4724]: I0226 11:30:21.535539 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:30:21 crc kubenswrapper[4724]: I0226 11:30:21.535551 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:30:22 crc kubenswrapper[4724]: I0226 11:30:21.535561 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:30:22 crc kubenswrapper[4724]: E0226 11:30:22.619397 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 11:30:22 crc kubenswrapper[4724]: E0226 11:30:22.619550 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7m4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fllvh_openstack(f6f963de-7cc1-40fa-93ce-5f1facd31ffc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:22 crc kubenswrapper[4724]: E0226 11:30:22.620694 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fllvh" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" Feb 26 11:30:22 crc kubenswrapper[4724]: E0226 11:30:22.769021 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-fllvh" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" Feb 26 11:30:22 crc kubenswrapper[4724]: I0226 11:30:22.924556 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:24 crc kubenswrapper[4724]: I0226 11:30:24.408738 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:27 crc kubenswrapper[4724]: I0226 11:30:27.925551 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:29 crc kubenswrapper[4724]: I0226 11:30:29.409496 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:32 crc kubenswrapper[4724]: I0226 11:30:32.926357 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:33 crc kubenswrapper[4724]: E0226 11:30:33.922933 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 26 11:30:33 crc kubenswrapper[4724]: E0226 11:30:33.923073 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n664h8chbhbch646hc5h5fh656h677hb8h544h55bh668h64fhcfh58fh5cch5b5h5c7h559h584h5bdh6dhch5dbh674h5f5hb5h5bch6fh5c8h5ddq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m98bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5cf69d49-66r7l_openstack(3f6eacb5-c0ea-407a-b41a-2a50a048ec9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:33 crc kubenswrapper[4724]: E0226 11:30:33.925286 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5cf69d49-66r7l" podUID="3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.012774 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.082209 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-svc\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.082630 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-nb\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.082852 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.084847 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-swift-storage-0\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.084896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-config\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.087340 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjq89\" (UniqueName: \"kubernetes.io/projected/a12e3c20-9594-4fff-8f15-47e10d1c3f08-kube-api-access-kjq89\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.125279 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12e3c20-9594-4fff-8f15-47e10d1c3f08-kube-api-access-kjq89" (OuterVolumeSpecName: "kube-api-access-kjq89") pod "a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "kube-api-access-kjq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.159637 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.178704 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.198248 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-config" (OuterVolumeSpecName: "config") pod "a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.199773 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: W0226 11:30:34.200345 4724 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a12e3c20-9594-4fff-8f15-47e10d1c3f08/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.200460 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.201279 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb\") pod \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\" (UID: \"a12e3c20-9594-4fff-8f15-47e10d1c3f08\") " Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.202115 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.202141 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.202195 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.202210 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.202223 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjq89\" (UniqueName: \"kubernetes.io/projected/a12e3c20-9594-4fff-8f15-47e10d1c3f08-kube-api-access-kjq89\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.204072 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"a12e3c20-9594-4fff-8f15-47e10d1c3f08" (UID: "a12e3c20-9594-4fff-8f15-47e10d1c3f08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.304371 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12e3c20-9594-4fff-8f15-47e10d1c3f08-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.409464 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.906846 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" event={"ID":"a12e3c20-9594-4fff-8f15-47e10d1c3f08","Type":"ContainerDied","Data":"04acfc6f6e57ebfb0984f01ed9ee85a5f3dceb6d79302220b9c429563d3039d4"} Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.906898 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.906913 4724 scope.go:117] "RemoveContainer" containerID="acf0cb39174f509c83a364d34c1fa4d0eede352e59f1dec11b4107aa6a6adf20" Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.969019 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"] Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.976025 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-dhjzj"] Feb 26 11:30:34 crc kubenswrapper[4724]: I0226 11:30:34.979695 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:30:35 crc kubenswrapper[4724]: I0226 11:30:35.992013 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" path="/var/lib/kubelet/pods/a12e3c20-9594-4fff-8f15-47e10d1c3f08/volumes" Feb 26 11:30:37 crc kubenswrapper[4724]: I0226 11:30:37.927353 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-dhjzj" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Feb 26 11:30:38 crc kubenswrapper[4724]: E0226 11:30:38.607836 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 26 11:30:38 crc kubenswrapper[4724]: E0226 11:30:38.608651 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56dh6ch554h5h5bchcbh6bh64dh5c4h68dh547h677h659h668h9h59dh5b9h66dh87hf7h678h5cfh589h597h56fh674h8h84h68bh646h6bh679q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftfjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7489d86c77-spnp8_openstack(ccd3a5e4-52a8-4f94-a4d3-c2099e118b30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:38 crc kubenswrapper[4724]: E0226 11:30:38.611669 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7489d86c77-spnp8" podUID="ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" Feb 26 11:30:39 crc kubenswrapper[4724]: I0226 11:30:39.408639 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:44 crc kubenswrapper[4724]: I0226 11:30:44.408951 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:46 crc kubenswrapper[4724]: I0226 11:30:46.906821 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:30:46 crc kubenswrapper[4724]: I0226 11:30:46.907270 4724 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:30:49 crc kubenswrapper[4724]: I0226 11:30:49.411577 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.503260 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:30:50 crc kubenswrapper[4724]: E0226 11:30:50.510458 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 11:30:50 crc kubenswrapper[4724]: E0226 11:30:50.510663 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7m4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizeP
olicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fllvh_openstack(f6f963de-7cc1-40fa-93ce-5f1facd31ffc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:50 crc kubenswrapper[4724]: E0226 11:30:50.512761 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fllvh" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" Feb 26 11:30:50 crc kubenswrapper[4724]: E0226 11:30:50.514493 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 26 11:30:50 crc kubenswrapper[4724]: E0226 11:30:50.514639 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n649h596h75hb4h646hdbhf5h5dchdch5bdhcch59h579h7bh89h67ch5d6h674h7chddh66dh95h65hd9h54hb5h5dfh6fh648h577h588hffq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpd6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b5845cdd9-d7d56_openstack(93f9ce1f-2294-454b-bda1-d114e3ab9422): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:50 crc kubenswrapper[4724]: E0226 11:30:50.518378 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" 
pod="openstack/horizon-5b5845cdd9-d7d56" podUID="93f9ce1f-2294-454b-bda1-d114e3ab9422" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.623103 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.640776 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.668518 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-config-data\") pod \"a7effd79-5961-474b-b3b3-4a41b89db380\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.668578 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-fernet-keys\") pod \"a7effd79-5961-474b-b3b3-4a41b89db380\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.668599 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-scripts\") pod \"a7effd79-5961-474b-b3b3-4a41b89db380\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.668728 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-credential-keys\") pod \"a7effd79-5961-474b-b3b3-4a41b89db380\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.668866 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-combined-ca-bundle\") pod \"a7effd79-5961-474b-b3b3-4a41b89db380\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.668914 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsbs\" (UniqueName: \"kubernetes.io/projected/a7effd79-5961-474b-b3b3-4a41b89db380-kube-api-access-9vsbs\") pod \"a7effd79-5961-474b-b3b3-4a41b89db380\" (UID: \"a7effd79-5961-474b-b3b3-4a41b89db380\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.702766 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7effd79-5961-474b-b3b3-4a41b89db380-kube-api-access-9vsbs" (OuterVolumeSpecName: "kube-api-access-9vsbs") pod "a7effd79-5961-474b-b3b3-4a41b89db380" (UID: "a7effd79-5961-474b-b3b3-4a41b89db380"). InnerVolumeSpecName "kube-api-access-9vsbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.709301 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a7effd79-5961-474b-b3b3-4a41b89db380" (UID: "a7effd79-5961-474b-b3b3-4a41b89db380"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.714791 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a7effd79-5961-474b-b3b3-4a41b89db380" (UID: "a7effd79-5961-474b-b3b3-4a41b89db380"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.716427 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-scripts" (OuterVolumeSpecName: "scripts") pod "a7effd79-5961-474b-b3b3-4a41b89db380" (UID: "a7effd79-5961-474b-b3b3-4a41b89db380"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.719906 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-config-data" (OuterVolumeSpecName: "config-data") pod "a7effd79-5961-474b-b3b3-4a41b89db380" (UID: "a7effd79-5961-474b-b3b3-4a41b89db380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.745356 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7effd79-5961-474b-b3b3-4a41b89db380" (UID: "a7effd79-5961-474b-b3b3-4a41b89db380"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771079 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m98bt\" (UniqueName: \"kubernetes.io/projected/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-kube-api-access-m98bt\") pod \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771143 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-horizon-secret-key\") pod \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771220 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-config-data\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771270 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-scripts\") pod \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771315 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-scripts\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc 
kubenswrapper[4724]: I0226 11:30:50.771341 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gztzg\" (UniqueName: \"kubernetes.io/projected/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-kube-api-access-gztzg\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771373 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-config-data\") pod \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771401 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771485 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-httpd-run\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771523 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-logs\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771573 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-combined-ca-bundle\") pod \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\" (UID: \"97d129ee-f8b1-4cad-94c2-9f1cdb871c80\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771668 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-logs\") pod \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\" (UID: \"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e\") " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.771980 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772265 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-logs" (OuterVolumeSpecName: "logs") pod "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" (UID: "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772364 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-logs" (OuterVolumeSpecName: "logs") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772383 4724 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772407 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772421 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vsbs\" (UniqueName: \"kubernetes.io/projected/a7effd79-5961-474b-b3b3-4a41b89db380-kube-api-access-9vsbs\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772436 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772449 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772461 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7effd79-5961-474b-b3b3-4a41b89db380-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772474 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.772804 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-scripts" (OuterVolumeSpecName: "scripts") pod "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" (UID: "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.773593 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-config-data" (OuterVolumeSpecName: "config-data") pod "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" (UID: "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.778196 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" (UID: "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.778219 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-kube-api-access-gztzg" (OuterVolumeSpecName: "kube-api-access-gztzg") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). InnerVolumeSpecName "kube-api-access-gztzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.782813 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-scripts" (OuterVolumeSpecName: "scripts") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.783269 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-kube-api-access-m98bt" (OuterVolumeSpecName: "kube-api-access-m98bt") pod "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" (UID: "3f6eacb5-c0ea-407a-b41a-2a50a048ec9e"). InnerVolumeSpecName "kube-api-access-m98bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.821301 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.826610 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874120 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m98bt\" (UniqueName: \"kubernetes.io/projected/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-kube-api-access-m98bt\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874158 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874172 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874199 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874211 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gztzg\" (UniqueName: \"kubernetes.io/projected/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-kube-api-access-gztzg\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874222 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874247 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874258 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874269 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.874279 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.881894 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-config-data" (OuterVolumeSpecName: "config-data") pod "97d129ee-f8b1-4cad-94c2-9f1cdb871c80" (UID: "97d129ee-f8b1-4cad-94c2-9f1cdb871c80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.915297 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.977158 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d129ee-f8b1-4cad-94c2-9f1cdb871c80-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:50 crc kubenswrapper[4724]: I0226 11:30:50.977249 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.063228 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf69d49-66r7l" event={"ID":"3f6eacb5-c0ea-407a-b41a-2a50a048ec9e","Type":"ContainerDied","Data":"d4c2715d538522412287cb9e7735f5c0102322b239c13a597b87c598635ee75a"} Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.063259 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf69d49-66r7l" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.066219 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vb9m8" event={"ID":"a7effd79-5961-474b-b3b3-4a41b89db380","Type":"ContainerDied","Data":"653e5b9117aeecbe6e54e3d35fc752e082db09719543f0c3fe58501ce0923522"} Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.066267 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vb9m8" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.066269 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653e5b9117aeecbe6e54e3d35fc752e082db09719543f0c3fe58501ce0923522" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.070790 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97d129ee-f8b1-4cad-94c2-9f1cdb871c80","Type":"ContainerDied","Data":"5d9b6b2450fb846c43cce7e83bcfab5fc7a952bbfa2a9c330c798319c4101c88"} Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.070801 4724 util.go:48] "No ready sandbox for pod can be found. 
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.162718 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cf69d49-66r7l"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.179663 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cf69d49-66r7l"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.191769 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.203567 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.219730 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.220556 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-log"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.220575 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-log"
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.220634 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7effd79-5961-474b-b3b3-4a41b89db380" containerName="keystone-bootstrap"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.220646 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7effd79-5961-474b-b3b3-4a41b89db380" containerName="keystone-bootstrap"
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.220668 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.220676 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns"
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.220723 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="init"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.220732 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="init"
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.220760 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-httpd"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.220802 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-httpd"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.221164 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-log"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.221244 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7effd79-5961-474b-b3b3-4a41b89db380" containerName="keystone-bootstrap"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.221314 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" containerName="glance-httpd"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.221332 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns"
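
The cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 burst is admission-time housekeeping: while handling the SyncLoop ADD for the replacement glance pod, RemoveStaleState purges CPU and memory pinning records left behind by pods that no longer exist (the glance, keystone-bootstrap and dnsmasq UIDs deleted above). Despite the E prefix these are routine cleanups, each paired with a successful "Deleted CPUSet assignment". The kubelet checkpoints this state under /var/lib/kubelet; a rough sketch for inspecting the CPU checkpoint, where the file name and JSON field names are assumptions based on default upstream kubelet behavior:

// cpustate.go - rough sketch for inspecting the kubelet's CPU manager
// checkpoint. Path and field names are assumptions (upstream defaults);
// adjust for your deployment.
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

type cpuManagerCheckpoint struct {
    PolicyName    string `json:"policyName"`
    DefaultCPUSet string `json:"defaultCpuSet"`
    // podUID -> containerName -> assigned cpuset (static policy only)
    Entries map[string]map[string]string `json:"entries"`
}

func main() {
    raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    var st cpuManagerCheckpoint
    if err := json.Unmarshal(raw, &st); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Printf("policy=%s defaultCpuSet=%q\n", st.PolicyName, st.DefaultCPUSet)
    for podUID, containers := range st.Entries {
        for name, set := range containers {
            fmt.Printf("pod %s container %s -> cpus %s\n", podUID, name, set)
        }
    }
}

A stale entry here for a podUID that no longer exists is exactly what RemoveStaleState deletes.
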
podUID="a12e3c20-9594-4fff-8f15-47e10d1c3f08" containerName="dnsmasq-dns" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.223934 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.234703 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.234871 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.236397 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387764 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387808 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387855 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387895 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387921 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387956 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.387998 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x8fd\" (UniqueName: \"kubernetes.io/projected/cf5ef727-2542-4452-aff8-f34f3edea383-kube-api-access-8x8fd\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " 
pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.388034 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490089 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490147 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490235 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490302 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x8fd\" (UniqueName: \"kubernetes.io/projected/cf5ef727-2542-4452-aff8-f34f3edea383-kube-api-access-8x8fd\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490356 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490390 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490418 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.490480 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:30:51 crc 
kubenswrapper[4724]: I0226 11:30:51.491045 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.491496 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.494480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.495568 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.496273 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.497167 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.501345 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.511166 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x8fd\" (UniqueName: \"kubernetes.io/projected/cf5ef727-2542-4452-aff8-f34f3edea383-kube-api-access-8x8fd\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.532163 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " pod="openstack/glance-default-internal-api-0"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.565300 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
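
The MountVolume.MountDevice line is the mount-side mirror of the UnmountDevice step earlier: for a local PersistentVolume the "device mount path" is the PV's spec.local.path on the node (/mnt/openstack/pv04 here), staged once per device, after which the per-pod MountVolume.SetUp calls bind the secret, configmap-style and empty-dir volumes into the new pod's volume directories. A minimal client-go sketch that resolves that path, assuming a reachable kubeconfig in $KUBECONFIG:

// pvpath.go - minimal client-go sketch printing where a local PV such as
// "local-storage04-crc" lives on the node, matching the
// "device mount path" line above. The kubeconfig location is an assumption.
package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), "local-storage04-crc", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    if pv.Spec.Local != nil {
        fmt.Println(pv.Spec.Local.Path) // e.g. /mnt/openstack/pv04
    }
}
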
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.697566 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vb9m8"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.705323 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vb9m8"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.811324 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b5xkt"]
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.812998 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5xkt"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.815976 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.816198 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.818377 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.818407 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l4lrz"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.818390 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.878973 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5xkt"]
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.881713 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.881894 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5k5tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jrqgs_openstack(65202f21-3756-4083-b158-9f06dca33deb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 26 11:30:51 crc kubenswrapper[4724]: E0226 11:30:51.882986 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-jrqgs" podUID="65202f21-3756-4083-b158-9f06dca33deb"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.898336 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-scripts\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.898404 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-fernet-keys\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.898458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-combined-ca-bundle\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt"
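
The heat-db-sync failure is the container runtime returning gRPC Canceled while copying the image config; the kubelet surfaces it as ErrImagePull on the container, and on the next sync attempt (11:30:52 below) it becomes ImagePullBackOff. Both reasons appear as the container's waiting state on the Pod object, so they can be found without grepping node logs. A minimal client-go sketch, under the same kubeconfig assumption as above:

// pullfail.go - minimal client-go sketch listing containers stuck in
// ErrImagePull/ImagePullBackOff in the "openstack" namespace, the state
// heat-db-sync-jrqgs is in here.
package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods, err := cs.CoreV1().Pods("openstack").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        for _, st := range p.Status.ContainerStatuses {
            w := st.State.Waiting
            if w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
                fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, w.Reason, st.Image)
            }
        }
    }
}

The same listing would catch the ceilometer-0 and barbican-db-sync-rkvvl pulls that fail in the same way at 11:30:52 and 11:30:53 below.
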
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.898491 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhbc\" (UniqueName: \"kubernetes.io/projected/fb3c003b-9f91-4c11-a530-3f39fe5072b3-kube-api-access-lmhbc\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.898512 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-credential-keys\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.898533 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-config-data\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.947456 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7489d86c77-spnp8"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.986691 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f6eacb5-c0ea-407a-b41a-2a50a048ec9e" path="/var/lib/kubelet/pods/3f6eacb5-c0ea-407a-b41a-2a50a048ec9e/volumes"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.987140 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d129ee-f8b1-4cad-94c2-9f1cdb871c80" path="/var/lib/kubelet/pods/97d129ee-f8b1-4cad-94c2-9f1cdb871c80/volumes"
Feb 26 11:30:51 crc kubenswrapper[4724]: I0226 11:30:51.987932 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7effd79-5961-474b-b3b3-4a41b89db380" path="/var/lib/kubelet/pods/a7effd79-5961-474b-b3b3-4a41b89db380/volumes"
Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:51.999578 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-config-data\") pod \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") "
Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:51.999653 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-horizon-secret-key\") pod \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") "
Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:51.999707 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-logs\") pod \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") "
Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:51.999760 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftfjd\" (UniqueName: \"kubernetes.io/projected/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-kube-api-access-ftfjd\") pod \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") "
Feb 26 11:30:52 crc 
kubenswrapper[4724]: I0226 11:30:51.999851 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-scripts\") pod \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\" (UID: \"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30\") " Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.000199 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-scripts\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.000266 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-fernet-keys\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.000338 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-combined-ca-bundle\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.000370 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhbc\" (UniqueName: \"kubernetes.io/projected/fb3c003b-9f91-4c11-a530-3f39fe5072b3-kube-api-access-lmhbc\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.000396 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-credential-keys\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.002204 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-config-data" (OuterVolumeSpecName: "config-data") pod "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" (UID: "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.003070 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-config-data\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.003274 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.004281 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" (UID: "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.004581 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-logs" (OuterVolumeSpecName: "logs") pod "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" (UID: "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.004666 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-scripts" (OuterVolumeSpecName: "scripts") pod "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" (UID: "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.004990 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-kube-api-access-ftfjd" (OuterVolumeSpecName: "kube-api-access-ftfjd") pod "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" (UID: "ccd3a5e4-52a8-4f94-a4d3-c2099e118b30"). InnerVolumeSpecName "kube-api-access-ftfjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.005396 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-combined-ca-bundle\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.005491 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-scripts\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.007314 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-fernet-keys\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.008749 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-config-data\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.009057 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-credential-keys\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.020362 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhbc\" (UniqueName: \"kubernetes.io/projected/fb3c003b-9f91-4c11-a530-3f39fe5072b3-kube-api-access-lmhbc\") pod \"keystone-bootstrap-b5xkt\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.083592 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7489d86c77-spnp8" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.084580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7489d86c77-spnp8" event={"ID":"ccd3a5e4-52a8-4f94-a4d3-c2099e118b30","Type":"ContainerDied","Data":"0661f982c31bf8e02d2b29095e3b154123bffeb32b9ec424e2a8452e39a0843b"} Feb 26 11:30:52 crc kubenswrapper[4724]: E0226 11:30:52.096709 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-jrqgs" podUID="65202f21-3756-4083-b158-9f06dca33deb" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.105666 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.105719 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.105774 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftfjd\" (UniqueName: \"kubernetes.io/projected/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-kube-api-access-ftfjd\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.105786 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.152336 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7489d86c77-spnp8"] Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.160434 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7489d86c77-spnp8"] Feb 26 11:30:52 crc kubenswrapper[4724]: I0226 11:30:52.245149 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:30:52 crc kubenswrapper[4724]: E0226 11:30:52.637226 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 26 11:30:52 crc kubenswrapper[4724]: E0226 11:30:52.637801 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64h54bh67bh569h54dh598hc7hfbh567hdh595h5f6h55hd8h684h54dh97h56fh5b6h8bh7fhf4h64dh645h66dhfbhf4h94h557h696h68bh67q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl9tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9ce9a592-28c1-40fc-b4e5-90523b59c6d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:53 crc kubenswrapper[4724]: I0226 11:30:53.551390 4724 scope.go:117] "RemoveContainer" containerID="7564d9507bf6de308cbc7a9df902bec921f2b88ff4914c260d5f31aaf50fc1f1" Feb 26 11:30:53 crc kubenswrapper[4724]: E0226 11:30:53.616367 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 26 11:30:53 crc kubenswrapper[4724]: E0226 11:30:53.616672 4724 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kp5v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rkvvl_openstack(dedd4492-c73a-4f47-8243-fea2dd842a4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:30:53 crc kubenswrapper[4724]: E0226 11:30:53.618396 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rkvvl" podUID="dedd4492-c73a-4f47-8243-fea2dd842a4f" Feb 26 11:30:53 crc kubenswrapper[4724]: I0226 11:30:53.845748 4724 scope.go:117] "RemoveContainer" containerID="7f22c480fe4b6d6f2ac450dd40d1b5ae07cb55cceb9584c8875fe2f7e0908678" Feb 26 11:30:53 crc kubenswrapper[4724]: I0226 11:30:53.888986 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:30:53 crc kubenswrapper[4724]: I0226 11:30:53.890950 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:30:53 crc kubenswrapper[4724]: I0226 11:30:53.897607 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:30:53 crc kubenswrapper[4724]: I0226 11:30:53.919643 4724 scope.go:117] "RemoveContainer" containerID="8e128566c85aa0947463faed4e79eb920acd9e1f42fa142f14ef157ca5c05c6b" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.023652 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccd3a5e4-52a8-4f94-a4d3-c2099e118b30" path="/var/lib/kubelet/pods/ccd3a5e4-52a8-4f94-a4d3-c2099e118b30/volumes" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058323 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-swift-storage-0\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058370 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-sb\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058447 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b6n2\" (UniqueName: \"kubernetes.io/projected/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-kube-api-access-7b6n2\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058471 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058502 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058522 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-logs\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058543 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-config-data\") pod \"93f9ce1f-2294-454b-bda1-d114e3ab9422\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058566 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-nb\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058596 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/93f9ce1f-2294-454b-bda1-d114e3ab9422-horizon-secret-key\") pod 
\"93f9ce1f-2294-454b-bda1-d114e3ab9422\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058620 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpd6d\" (UniqueName: \"kubernetes.io/projected/93f9ce1f-2294-454b-bda1-d114e3ab9422-kube-api-access-zpd6d\") pod \"93f9ce1f-2294-454b-bda1-d114e3ab9422\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058650 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-combined-ca-bundle\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058750 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-config-data\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058786 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-scripts\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058809 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-httpd-run\") pod \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\" (UID: \"15eaf092-bdf8-4c23-91a5-3a3d8011b77e\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058876 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f9ce1f-2294-454b-bda1-d114e3ab9422-logs\") pod \"93f9ce1f-2294-454b-bda1-d114e3ab9422\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058904 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v2sm\" (UniqueName: \"kubernetes.io/projected/936380ab-8283-489b-a609-f583e11b71eb-kube-api-access-8v2sm\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058930 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-config\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.058948 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-scripts\") pod \"93f9ce1f-2294-454b-bda1-d114e3ab9422\" (UID: \"93f9ce1f-2294-454b-bda1-d114e3ab9422\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.059788 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-scripts" (OuterVolumeSpecName: "scripts") pod "93f9ce1f-2294-454b-bda1-d114e3ab9422" (UID: 
"93f9ce1f-2294-454b-bda1-d114e3ab9422"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.060971 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.061480 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f9ce1f-2294-454b-bda1-d114e3ab9422-logs" (OuterVolumeSpecName: "logs") pod "93f9ce1f-2294-454b-bda1-d114e3ab9422" (UID: "93f9ce1f-2294-454b-bda1-d114e3ab9422"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.073940 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.074923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-logs" (OuterVolumeSpecName: "logs") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.075019 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-scripts" (OuterVolumeSpecName: "scripts") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.075075 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f9ce1f-2294-454b-bda1-d114e3ab9422-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "93f9ce1f-2294-454b-bda1-d114e3ab9422" (UID: "93f9ce1f-2294-454b-bda1-d114e3ab9422"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.075157 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f9ce1f-2294-454b-bda1-d114e3ab9422-kube-api-access-zpd6d" (OuterVolumeSpecName: "kube-api-access-zpd6d") pod "93f9ce1f-2294-454b-bda1-d114e3ab9422" (UID: "93f9ce1f-2294-454b-bda1-d114e3ab9422"). InnerVolumeSpecName "kube-api-access-zpd6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.075226 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936380ab-8283-489b-a609-f583e11b71eb-kube-api-access-8v2sm" (OuterVolumeSpecName: "kube-api-access-8v2sm") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb"). InnerVolumeSpecName "kube-api-access-8v2sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.076571 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-config-data" (OuterVolumeSpecName: "config-data") pod "93f9ce1f-2294-454b-bda1-d114e3ab9422" (UID: "93f9ce1f-2294-454b-bda1-d114e3ab9422"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.080727 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-kube-api-access-7b6n2" (OuterVolumeSpecName: "kube-api-access-7b6n2") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "kube-api-access-7b6n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.120062 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.120128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-vctpx" event={"ID":"936380ab-8283-489b-a609-f583e11b71eb","Type":"ContainerDied","Data":"43082e2ce23d78d70907078610b58f3df085ba52cd773a1aad5b99cc9ad57877"} Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.120297 4724 scope.go:117] "RemoveContainer" containerID="b78f93f30a002ba683174deab2048b34bddc010ad23122c2deeaa7d467a1c2fa" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.150520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5845cdd9-d7d56" event={"ID":"93f9ce1f-2294-454b-bda1-d114e3ab9422","Type":"ContainerDied","Data":"10c5a56776c775a82ddb0a626b8f5798d98f42180344d9340138c325943096c9"} Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.150912 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b5845cdd9-d7d56" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168730 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b6n2\" (UniqueName: \"kubernetes.io/projected/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-kube-api-access-7b6n2\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168786 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168803 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168820 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168833 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/93f9ce1f-2294-454b-bda1-d114e3ab9422-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168844 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpd6d\" (UniqueName: \"kubernetes.io/projected/93f9ce1f-2294-454b-bda1-d114e3ab9422-kube-api-access-zpd6d\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168858 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168869 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168893 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f9ce1f-2294-454b-bda1-d114e3ab9422-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168904 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v2sm\" (UniqueName: \"kubernetes.io/projected/936380ab-8283-489b-a609-f583e11b71eb-kube-api-access-8v2sm\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.168916 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f9ce1f-2294-454b-bda1-d114e3ab9422-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.177285 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.190494 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bnckl" event={"ID":"3f54af76-4781-4532-b8fc-5100f18b0579","Type":"ContainerStarted","Data":"0c3d6803c259df57f6cd352267d647dad45979ecb49ea616bc8093f7a864db34"} Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.196363 4724 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.199973 4724 scope.go:117] "RemoveContainer" containerID="b4c39caa22b6a2c7994eb3594399cd4490118a64a4337dc3bc63f443016dc109" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.203307 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.203633 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15eaf092-bdf8-4c23-91a5-3a3d8011b77e","Type":"ContainerDied","Data":"4bb54e3cc98c08e0aa512ae1b93672fddcc7563ae74029ea20d5f919fe520487"} Feb 26 11:30:54 crc kubenswrapper[4724]: E0226 11:30:54.207672 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rkvvl" podUID="dedd4492-c73a-4f47-8243-fea2dd842a4f" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.219959 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.233515 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.270443 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.272720 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.272850 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.284967 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.298406 4724 scope.go:117] "RemoveContainer" containerID="f047e98df3577cc95e8c255f4a260177dd51762804310d3a07d08bf1f1df26ed" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.302962 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-config-data" (OuterVolumeSpecName: "config-data") pod "15eaf092-bdf8-4c23-91a5-3a3d8011b77e" (UID: "15eaf092-bdf8-4c23-91a5-3a3d8011b77e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.307757 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-config" (OuterVolumeSpecName: "config") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.338263 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5845cdd9-d7d56"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.370861 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b5845cdd9-d7d56"] Feb 26 11:30:54 crc kubenswrapper[4724]: E0226 11:30:54.395620 4724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc podName:936380ab-8283-489b-a609-f583e11b71eb nodeName:}" failed. No retries permitted until 2026-02-26 11:30:54.895596365 +0000 UTC m=+1521.551335550 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb") : error deleting /var/lib/kubelet/pods/936380ab-8283-489b-a609-f583e11b71eb/volume-subpaths: remove /var/lib/kubelet/pods/936380ab-8283-489b-a609-f583e11b71eb/volume-subpaths: no such file or directory Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.395985 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15eaf092-bdf8-4c23-91a5-3a3d8011b77e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.396024 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.396037 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.396069 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.399369 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bnckl" podStartSLOduration=4.332163852 podStartE2EDuration="1m7.399350691s" podCreationTimestamp="2026-02-26 11:29:47 +0000 UTC" firstStartedPulling="2026-02-26 11:29:48.745948292 +0000 UTC m=+1455.401687407" lastFinishedPulling="2026-02-26 11:30:51.813135131 +0000 UTC m=+1518.468874246" observedRunningTime="2026-02-26 11:30:54.339839681 +0000 UTC m=+1520.995578826" watchObservedRunningTime="2026-02-26 11:30:54.399350691 +0000 UTC m=+1521.055089806" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.415494 4724 scope.go:117] "RemoveContainer" containerID="7f9d369554335df684e76b6310803c36b9599308d7257cbd5d7dfaec3e0d6cea" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.435615 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57977849d4-8s5ds"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.454841 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-lwblq"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.499311 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.591818 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ddfb9fd96-hzc8c"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.725875 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5xkt"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.747502 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.807659 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.847334 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:30:54 crc kubenswrapper[4724]: E0226 11:30:54.848085 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-httpd" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848104 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-httpd" Feb 26 11:30:54 crc kubenswrapper[4724]: E0226 11:30:54.848126 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848134 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" Feb 26 11:30:54 crc kubenswrapper[4724]: E0226 11:30:54.848146 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-log" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848154 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-log" Feb 26 11:30:54 crc kubenswrapper[4724]: E0226 11:30:54.848192 4724 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="init" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848200 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="init" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848425 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-httpd" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848441 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="936380ab-8283-489b-a609-f583e11b71eb" containerName="dnsmasq-dns" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.848453 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" containerName="glance-log" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.849561 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.859106 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.870021 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.873612 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.922392 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc\") pod \"936380ab-8283-489b-a609-f583e11b71eb\" (UID: \"936380ab-8283-489b-a609-f583e11b71eb\") " Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.923446 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "936380ab-8283-489b-a609-f583e11b71eb" (UID: "936380ab-8283-489b-a609-f583e11b71eb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.923955 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.924246 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-logs\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.924613 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tvdj\" (UniqueName: \"kubernetes.io/projected/94be3313-633f-4595-8195-b96e91d607ce-kube-api-access-2tvdj\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.925220 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-scripts\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.925454 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.925677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-config-data\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.925875 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.925987 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:54 crc kubenswrapper[4724]: I0226 11:30:54.930724 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936380ab-8283-489b-a609-f583e11b71eb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:55 crc 
kubenswrapper[4724]: I0226 11:30:55.012766 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038113 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-scripts\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038309 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-config-data\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038522 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038678 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038759 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-logs\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.038836 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tvdj\" (UniqueName: \"kubernetes.io/projected/94be3313-633f-4595-8195-b96e91d607ce-kube-api-access-2tvdj\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.040115 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") device mount path 
\"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.040276 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-logs\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.040129 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.064948 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-scripts\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.065194 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.066036 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.066519 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-config-data\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.067748 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tvdj\" (UniqueName: \"kubernetes.io/projected/94be3313-633f-4595-8195-b96e91d607ce-kube-api-access-2tvdj\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: E0226 11:30:55.069563 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15eaf092_bdf8_4c23_91a5_3a3d8011b77e.slice/crio-4bb54e3cc98c08e0aa512ae1b93672fddcc7563ae74029ea20d5f919fe520487\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15eaf092_bdf8_4c23_91a5_3a3d8011b77e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.108582 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-vctpx"] Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.126289 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.126756 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-vctpx"] Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.223280 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5ef727-2542-4452-aff8-f34f3edea383","Type":"ContainerStarted","Data":"bbd24354fc5e252b81eabc20fa8b0a58fc6f310cfadcff1c33370c9b65a12127"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.231846 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerStarted","Data":"c6e21027ba0c7d09f5de31fa4c76eb438c2522455921165156e20f56089e2b47"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.242195 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerStarted","Data":"2d05173324f7b9fffef5e1a7011d5691a46b5f715bb1e31c9f78c96c22c30361"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.260890 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xkt" event={"ID":"fb3c003b-9f91-4c11-a530-3f39fe5072b3","Type":"ContainerStarted","Data":"d18746885daa54caaa95fe8bec1dd5ec0a80732abefe55267bf92a3efa0fcb54"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.260957 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xkt" event={"ID":"fb3c003b-9f91-4c11-a530-3f39fe5072b3","Type":"ContainerStarted","Data":"af61a00068ab0bae7a533fa1f767af91a887d8afc18c3272ff760d26a922a6f6"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.283368 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.285821 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" event={"ID":"a41aac00-2bbf-4232-bd75-5bf0f9f69f70","Type":"ContainerStarted","Data":"30ebceb35be98207d7a43c771910f507ae7bd49438dbca66d2bedcdf5387c759"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.285871 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" event={"ID":"a41aac00-2bbf-4232-bd75-5bf0f9f69f70","Type":"ContainerStarted","Data":"9bd0f96e3bb3ca546225005b5ff76196479d4a616759dd2868e8720cae055f88"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.309095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-lwblq" event={"ID":"2e9c2690-0081-4d25-9813-e94f387c218d","Type":"ContainerStarted","Data":"76d76a614fd288d13ea9d1fb8a26b0d3c25f4ae13bba8a17940fba0b28c0ba01"} Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.315163 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b5xkt" podStartSLOduration=4.315136361 podStartE2EDuration="4.315136361s" podCreationTimestamp="2026-02-26 11:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:55.309360123 +0000 UTC m=+1521.965099248" watchObservedRunningTime="2026-02-26 11:30:55.315136361 +0000 UTC m=+1521.970875476" Feb 26 11:30:55 crc kubenswrapper[4724]: I0226 11:30:55.357898 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" podStartSLOduration=55.357875583 podStartE2EDuration="55.357875583s" podCreationTimestamp="2026-02-26 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:30:55.344169153 +0000 UTC m=+1521.999908268" watchObservedRunningTime="2026-02-26 11:30:55.357875583 +0000 UTC m=+1522.013614708" Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.030450 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15eaf092-bdf8-4c23-91a5-3a3d8011b77e" path="/var/lib/kubelet/pods/15eaf092-bdf8-4c23-91a5-3a3d8011b77e/volumes" Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.032441 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936380ab-8283-489b-a609-f583e11b71eb" path="/var/lib/kubelet/pods/936380ab-8283-489b-a609-f583e11b71eb/volumes" Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.033423 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f9ce1f-2294-454b-bda1-d114e3ab9422" path="/var/lib/kubelet/pods/93f9ce1f-2294-454b-bda1-d114e3ab9422/volumes" Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.104586 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:30:56 crc kubenswrapper[4724]: W0226 11:30:56.139453 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94be3313_633f_4595_8195_b96e91d607ce.slice/crio-3e5a01f7cf1ec7a507b4abf17b06e2721adfdad068151684321aad82917f8944 WatchSource:0}: Error finding container 
3e5a01f7cf1ec7a507b4abf17b06e2721adfdad068151684321aad82917f8944: Status 404 returned error can't find the container with id 3e5a01f7cf1ec7a507b4abf17b06e2721adfdad068151684321aad82917f8944 Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.318114 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerStarted","Data":"d2d77a3050ec82557b30ef2d6221dd500eaf3a3a9161730ffc1c169091a5a035"} Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.320024 4724 generic.go:334] "Generic (PLEG): container finished" podID="a41aac00-2bbf-4232-bd75-5bf0f9f69f70" containerID="30ebceb35be98207d7a43c771910f507ae7bd49438dbca66d2bedcdf5387c759" exitCode=0 Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.320383 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" event={"ID":"a41aac00-2bbf-4232-bd75-5bf0f9f69f70","Type":"ContainerDied","Data":"30ebceb35be98207d7a43c771910f507ae7bd49438dbca66d2bedcdf5387c759"} Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.324849 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5ef727-2542-4452-aff8-f34f3edea383","Type":"ContainerStarted","Data":"221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83"} Feb 26 11:30:56 crc kubenswrapper[4724]: I0226 11:30:56.327587 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94be3313-633f-4595-8195-b96e91d607ce","Type":"ContainerStarted","Data":"3e5a01f7cf1ec7a507b4abf17b06e2721adfdad068151684321aad82917f8944"} Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.942677 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.960208 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-config-volume\") pod \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.960487 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-secret-volume\") pod \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.960575 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzmbg\" (UniqueName: \"kubernetes.io/projected/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-kube-api-access-rzmbg\") pod \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\" (UID: \"a41aac00-2bbf-4232-bd75-5bf0f9f69f70\") " Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.974905 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-config-volume" (OuterVolumeSpecName: "config-volume") pod "a41aac00-2bbf-4232-bd75-5bf0f9f69f70" (UID: "a41aac00-2bbf-4232-bd75-5bf0f9f69f70"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.986699 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a41aac00-2bbf-4232-bd75-5bf0f9f69f70" (UID: "a41aac00-2bbf-4232-bd75-5bf0f9f69f70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:30:57 crc kubenswrapper[4724]: I0226 11:30:57.994693 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-kube-api-access-rzmbg" (OuterVolumeSpecName: "kube-api-access-rzmbg") pod "a41aac00-2bbf-4232-bd75-5bf0f9f69f70" (UID: "a41aac00-2bbf-4232-bd75-5bf0f9f69f70"). InnerVolumeSpecName "kube-api-access-rzmbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:30:58 crc kubenswrapper[4724]: I0226 11:30:58.066660 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzmbg\" (UniqueName: \"kubernetes.io/projected/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-kube-api-access-rzmbg\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:58 crc kubenswrapper[4724]: I0226 11:30:58.066723 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:58 crc kubenswrapper[4724]: I0226 11:30:58.066736 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a41aac00-2bbf-4232-bd75-5bf0f9f69f70-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:30:58 crc kubenswrapper[4724]: I0226 11:30:58.369207 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" event={"ID":"a41aac00-2bbf-4232-bd75-5bf0f9f69f70","Type":"ContainerDied","Data":"9bd0f96e3bb3ca546225005b5ff76196479d4a616759dd2868e8720cae055f88"} Feb 26 11:30:58 crc kubenswrapper[4724]: I0226 11:30:58.369254 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd0f96e3bb3ca546225005b5ff76196479d4a616759dd2868e8720cae055f88" Feb 26 11:30:58 crc kubenswrapper[4724]: I0226 11:30:58.369312 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd" Feb 26 11:30:59 crc kubenswrapper[4724]: I0226 11:30:59.384861 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerStarted","Data":"3e5edab1e2c718511750fd9327e7561944102843f5433d3bb1fb9259ca86717b"} Feb 26 11:30:59 crc kubenswrapper[4724]: I0226 11:30:59.388714 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-lwblq" event={"ID":"2e9c2690-0081-4d25-9813-e94f387c218d","Type":"ContainerStarted","Data":"1cd908824885fea8e8151befad8384cea2476e615a1b043b266cf513ee595cf5"} Feb 26 11:30:59 crc kubenswrapper[4724]: I0226 11:30:59.391121 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerStarted","Data":"6c0cf98c9d0fef3ab39c0703b5c93439207fec4b8a3f2f2032db879069cde925"} Feb 26 11:30:59 crc kubenswrapper[4724]: I0226 11:30:59.419078 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57977849d4-8s5ds" podStartSLOduration=61.444236102 podStartE2EDuration="1m2.41905327s" podCreationTimestamp="2026-02-26 11:29:57 +0000 UTC" firstStartedPulling="2026-02-26 11:30:54.415705619 +0000 UTC m=+1521.071444724" lastFinishedPulling="2026-02-26 11:30:55.390522777 +0000 UTC m=+1522.046261892" observedRunningTime="2026-02-26 11:30:59.411874897 +0000 UTC m=+1526.067614032" watchObservedRunningTime="2026-02-26 11:30:59.41905327 +0000 UTC m=+1526.074792385" Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.403430 4724 generic.go:334] "Generic (PLEG): container finished" podID="2e9c2690-0081-4d25-9813-e94f387c218d" containerID="1cd908824885fea8e8151befad8384cea2476e615a1b043b266cf513ee595cf5" exitCode=0 Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.403964 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-lwblq" event={"ID":"2e9c2690-0081-4d25-9813-e94f387c218d","Type":"ContainerDied","Data":"1cd908824885fea8e8151befad8384cea2476e615a1b043b266cf513ee595cf5"} Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.408418 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5ef727-2542-4452-aff8-f34f3edea383","Type":"ContainerStarted","Data":"c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9"} Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.411984 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerStarted","Data":"cf119be6b682f8400345d567636d81c24d1362c00c424d4a82811c66edd703a0"} Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.438543 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94be3313-633f-4595-8195-b96e91d607ce","Type":"ContainerStarted","Data":"6d76122482c7e3a10bd2190d7e3b9365fe187c527c403f205917bdec3ec81cb8"} Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.438611 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94be3313-633f-4595-8195-b96e91d607ce","Type":"ContainerStarted","Data":"16c71b54eec28fe9ff4de59e9710c2c74a9af78ce222076c21ef021233742034"} Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 
11:31:00.446304 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerStarted","Data":"84dc0f4c8b5a9cd74987b297b4f5ce4fc77e2059e78fa476a7ab9677dff56f72"} Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.457596 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-ddfb9fd96-hzc8c" podStartSLOduration=60.066285504 podStartE2EDuration="1m3.457550054s" podCreationTimestamp="2026-02-26 11:29:57 +0000 UTC" firstStartedPulling="2026-02-26 11:30:54.688054068 +0000 UTC m=+1521.343793183" lastFinishedPulling="2026-02-26 11:30:58.079318618 +0000 UTC m=+1524.735057733" observedRunningTime="2026-02-26 11:31:00.454463455 +0000 UTC m=+1527.110202590" watchObservedRunningTime="2026-02-26 11:31:00.457550054 +0000 UTC m=+1527.113289169" Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.483204 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.483185409 podStartE2EDuration="9.483185409s" podCreationTimestamp="2026-02-26 11:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:00.477258918 +0000 UTC m=+1527.132998033" watchObservedRunningTime="2026-02-26 11:31:00.483185409 +0000 UTC m=+1527.138924524" Feb 26 11:31:00 crc kubenswrapper[4724]: I0226 11:31:00.508535 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.508513266 podStartE2EDuration="6.508513266s" podCreationTimestamp="2026-02-26 11:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:00.502590595 +0000 UTC m=+1527.158329710" watchObservedRunningTime="2026-02-26 11:31:00.508513266 +0000 UTC m=+1527.164252381" Feb 26 11:31:01 crc kubenswrapper[4724]: I0226 11:31:01.566071 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:01 crc kubenswrapper[4724]: I0226 11:31:01.566444 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:01 crc kubenswrapper[4724]: I0226 11:31:01.618995 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:01 crc kubenswrapper[4724]: I0226 11:31:01.670198 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:01 crc kubenswrapper[4724]: I0226 11:31:01.855324 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-lwblq" Feb 26 11:31:01 crc kubenswrapper[4724]: I0226 11:31:01.990141 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vtf7\" (UniqueName: \"kubernetes.io/projected/2e9c2690-0081-4d25-9813-e94f387c218d-kube-api-access-5vtf7\") pod \"2e9c2690-0081-4d25-9813-e94f387c218d\" (UID: \"2e9c2690-0081-4d25-9813-e94f387c218d\") " Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.004698 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9c2690-0081-4d25-9813-e94f387c218d-kube-api-access-5vtf7" (OuterVolumeSpecName: "kube-api-access-5vtf7") pod "2e9c2690-0081-4d25-9813-e94f387c218d" (UID: "2e9c2690-0081-4d25-9813-e94f387c218d"). InnerVolumeSpecName "kube-api-access-5vtf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.092702 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vtf7\" (UniqueName: \"kubernetes.io/projected/2e9c2690-0081-4d25-9813-e94f387c218d-kube-api-access-5vtf7\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.540625 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535090-lwblq" Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.540617 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535090-lwblq" event={"ID":"2e9c2690-0081-4d25-9813-e94f387c218d","Type":"ContainerDied","Data":"76d76a614fd288d13ea9d1fb8a26b0d3c25f4ae13bba8a17940fba0b28c0ba01"} Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.540981 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d76a614fd288d13ea9d1fb8a26b0d3c25f4ae13bba8a17940fba0b28c0ba01" Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.541022 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.541370 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.931693 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-c5fwq"] Feb 26 11:31:02 crc kubenswrapper[4724]: I0226 11:31:02.950139 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535084-c5fwq"] Feb 26 11:31:02 crc kubenswrapper[4724]: E0226 11:31:02.976908 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-fllvh" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" Feb 26 11:31:03 crc kubenswrapper[4724]: I0226 11:31:03.990693 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893f427b-5554-4b22-82de-204e5893f5e3" path="/var/lib/kubelet/pods/893f427b-5554-4b22-82de-204e5893f5e3/volumes" Feb 26 11:31:04 crc kubenswrapper[4724]: I0226 11:31:04.557394 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:31:05 crc kubenswrapper[4724]: I0226 11:31:05.284225 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:31:05 crc kubenswrapper[4724]: I0226 11:31:05.284305 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:31:05 crc kubenswrapper[4724]: I0226 11:31:05.317875 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:31:05 crc kubenswrapper[4724]: I0226 11:31:05.332826 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:31:05 crc kubenswrapper[4724]: I0226 11:31:05.566067 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:31:05 crc kubenswrapper[4724]: I0226 11:31:05.566777 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:31:07 crc kubenswrapper[4724]: I0226 11:31:07.582892 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:31:07 crc kubenswrapper[4724]: I0226 11:31:07.583235 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:31:08 crc kubenswrapper[4724]: I0226 11:31:08.061842 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:31:08 crc kubenswrapper[4724]: I0226 11:31:08.061969 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:31:08 crc kubenswrapper[4724]: I0226 11:31:08.366464 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:31:08 crc kubenswrapper[4724]: I0226 11:31:08.366522 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:31:08 crc kubenswrapper[4724]: I0226 11:31:08.367760 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:31:09 crc kubenswrapper[4724]: I0226 11:31:09.603151 4724 generic.go:334] "Generic (PLEG): container finished" podID="3f54af76-4781-4532-b8fc-5100f18b0579" containerID="0c3d6803c259df57f6cd352267d647dad45979ecb49ea616bc8093f7a864db34" exitCode=0 Feb 26 11:31:09 crc kubenswrapper[4724]: I0226 11:31:09.603257 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bnckl" event={"ID":"3f54af76-4781-4532-b8fc-5100f18b0579","Type":"ContainerDied","Data":"0c3d6803c259df57f6cd352267d647dad45979ecb49ea616bc8093f7a864db34"} Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.031785 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bnckl" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.069811 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-scripts\") pod \"3f54af76-4781-4532-b8fc-5100f18b0579\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.069987 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-config-data\") pod \"3f54af76-4781-4532-b8fc-5100f18b0579\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.070024 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-combined-ca-bundle\") pod \"3f54af76-4781-4532-b8fc-5100f18b0579\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.070131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rn9p\" (UniqueName: \"kubernetes.io/projected/3f54af76-4781-4532-b8fc-5100f18b0579-kube-api-access-8rn9p\") pod \"3f54af76-4781-4532-b8fc-5100f18b0579\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.070244 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54af76-4781-4532-b8fc-5100f18b0579-logs\") pod \"3f54af76-4781-4532-b8fc-5100f18b0579\" (UID: \"3f54af76-4781-4532-b8fc-5100f18b0579\") " Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.082468 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f54af76-4781-4532-b8fc-5100f18b0579-logs" (OuterVolumeSpecName: "logs") pod "3f54af76-4781-4532-b8fc-5100f18b0579" (UID: "3f54af76-4781-4532-b8fc-5100f18b0579"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.083891 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54af76-4781-4532-b8fc-5100f18b0579-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.086898 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-scripts" (OuterVolumeSpecName: "scripts") pod "3f54af76-4781-4532-b8fc-5100f18b0579" (UID: "3f54af76-4781-4532-b8fc-5100f18b0579"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.091154 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f54af76-4781-4532-b8fc-5100f18b0579-kube-api-access-8rn9p" (OuterVolumeSpecName: "kube-api-access-8rn9p") pod "3f54af76-4781-4532-b8fc-5100f18b0579" (UID: "3f54af76-4781-4532-b8fc-5100f18b0579"). InnerVolumeSpecName "kube-api-access-8rn9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.123431 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-config-data" (OuterVolumeSpecName: "config-data") pod "3f54af76-4781-4532-b8fc-5100f18b0579" (UID: "3f54af76-4781-4532-b8fc-5100f18b0579"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.130057 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f54af76-4781-4532-b8fc-5100f18b0579" (UID: "3f54af76-4781-4532-b8fc-5100f18b0579"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.185971 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.186012 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.186031 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rn9p\" (UniqueName: \"kubernetes.io/projected/3f54af76-4781-4532-b8fc-5100f18b0579-kube-api-access-8rn9p\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.186044 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f54af76-4781-4532-b8fc-5100f18b0579-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.645502 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jrqgs" event={"ID":"65202f21-3756-4083-b158-9f06dca33deb","Type":"ContainerStarted","Data":"1c6c39dc2d7757dbed1a2892e1c42c6122582363b7b13fb7765bb627d4ad724b"} Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.648800 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerStarted","Data":"f023c34aa3b4f134fcc9d2bdbe799d8e74dd51f093b19e5f802bd9dae4867a1a"} Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.651003 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rkvvl" event={"ID":"dedd4492-c73a-4f47-8243-fea2dd842a4f","Type":"ContainerStarted","Data":"30c67edacf3dbdd37a1504690171351de2cf9b8023717ca71b1d73366dc02fc8"} Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.653080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bnckl" event={"ID":"3f54af76-4781-4532-b8fc-5100f18b0579","Type":"ContainerDied","Data":"1b5b63930d3504b43251344e93566145128262fe1db95c30c38f9f7bdb376646"} Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.653110 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5b63930d3504b43251344e93566145128262fe1db95c30c38f9f7bdb376646" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.653162 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bnckl" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.687809 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jrqgs" podStartSLOduration=3.914451748 podStartE2EDuration="1m25.687787178s" podCreationTimestamp="2026-02-26 11:29:46 +0000 UTC" firstStartedPulling="2026-02-26 11:29:48.624898489 +0000 UTC m=+1455.280637604" lastFinishedPulling="2026-02-26 11:31:10.398233919 +0000 UTC m=+1537.053973034" observedRunningTime="2026-02-26 11:31:11.674280803 +0000 UTC m=+1538.330019918" watchObservedRunningTime="2026-02-26 11:31:11.687787178 +0000 UTC m=+1538.343526313" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.800050 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rkvvl" podStartSLOduration=3.257833792 podStartE2EDuration="1m24.800027376s" podCreationTimestamp="2026-02-26 11:29:47 +0000 UTC" firstStartedPulling="2026-02-26 11:29:48.977093618 +0000 UTC m=+1455.632832733" lastFinishedPulling="2026-02-26 11:31:10.519287202 +0000 UTC m=+1537.175026317" observedRunningTime="2026-02-26 11:31:11.709976225 +0000 UTC m=+1538.365715340" watchObservedRunningTime="2026-02-26 11:31:11.800027376 +0000 UTC m=+1538.455766491" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.809078 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c6f668b64-t5tsj"] Feb 26 11:31:11 crc kubenswrapper[4724]: E0226 11:31:11.816712 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f54af76-4781-4532-b8fc-5100f18b0579" containerName="placement-db-sync" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.816873 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f54af76-4781-4532-b8fc-5100f18b0579" containerName="placement-db-sync" Feb 26 11:31:11 crc kubenswrapper[4724]: E0226 11:31:11.816967 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9c2690-0081-4d25-9813-e94f387c218d" containerName="oc" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.817038 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9c2690-0081-4d25-9813-e94f387c218d" containerName="oc" Feb 26 11:31:11 crc kubenswrapper[4724]: E0226 11:31:11.817122 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41aac00-2bbf-4232-bd75-5bf0f9f69f70" containerName="collect-profiles" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.817340 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41aac00-2bbf-4232-bd75-5bf0f9f69f70" containerName="collect-profiles" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.817650 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9c2690-0081-4d25-9813-e94f387c218d" containerName="oc" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.817775 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f54af76-4781-4532-b8fc-5100f18b0579" containerName="placement-db-sync" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.817862 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41aac00-2bbf-4232-bd75-5bf0f9f69f70" containerName="collect-profiles" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.819379 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.828757 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.829047 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gtfzr" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.832913 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.833235 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.833269 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.885000 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c6f668b64-t5tsj"] Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.897344 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.897693 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-internal-tls-certs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.898045 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-combined-ca-bundle\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.898367 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123116af-ca93-48d5-95ef-9154cda84c60-logs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.898486 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-public-tls-certs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 11:31:11.898713 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnq8\" (UniqueName: \"kubernetes.io/projected/123116af-ca93-48d5-95ef-9154cda84c60-kube-api-access-psnq8\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:11 crc kubenswrapper[4724]: I0226 
11:31:11.898925 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-scripts\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.014899 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnq8\" (UniqueName: \"kubernetes.io/projected/123116af-ca93-48d5-95ef-9154cda84c60-kube-api-access-psnq8\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.014996 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-scripts\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.015047 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.015075 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-internal-tls-certs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.015845 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-combined-ca-bundle\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.015924 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-public-tls-certs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.015944 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123116af-ca93-48d5-95ef-9154cda84c60-logs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.016342 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123116af-ca93-48d5-95ef-9154cda84c60-logs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.025572 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-internal-tls-certs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.026601 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-public-tls-certs\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.031038 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-combined-ca-bundle\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.038265 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.048533 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-scripts\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.063891 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnq8\" (UniqueName: \"kubernetes.io/projected/123116af-ca93-48d5-95ef-9154cda84c60-kube-api-access-psnq8\") pod \"placement-6c6f668b64-t5tsj\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.176663 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.663285 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb3c003b-9f91-4c11-a530-3f39fe5072b3" containerID="d18746885daa54caaa95fe8bec1dd5ec0a80732abefe55267bf92a3efa0fcb54" exitCode=0 Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.663596 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xkt" event={"ID":"fb3c003b-9f91-4c11-a530-3f39fe5072b3","Type":"ContainerDied","Data":"d18746885daa54caaa95fe8bec1dd5ec0a80732abefe55267bf92a3efa0fcb54"} Feb 26 11:31:12 crc kubenswrapper[4724]: I0226 11:31:12.793715 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c6f668b64-t5tsj"] Feb 26 11:31:13 crc kubenswrapper[4724]: I0226 11:31:13.684302 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6f668b64-t5tsj" event={"ID":"123116af-ca93-48d5-95ef-9154cda84c60","Type":"ContainerStarted","Data":"b4cc1fec3b8aae9856d581f9f595a4e4629887f44d6b9ff89ce4e94b5030aa9e"} Feb 26 11:31:13 crc kubenswrapper[4724]: I0226 11:31:13.684689 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6f668b64-t5tsj" event={"ID":"123116af-ca93-48d5-95ef-9154cda84c60","Type":"ContainerStarted","Data":"58e9ca4f8eb246caaabc4b62bd3b5f71753945816dc69ebbac750df5a38a5f04"} Feb 26 11:31:13 crc kubenswrapper[4724]: I0226 11:31:13.684704 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6f668b64-t5tsj" event={"ID":"123116af-ca93-48d5-95ef-9154cda84c60","Type":"ContainerStarted","Data":"b76b515c1c410baecf94099b606a79634f6fd13e47d57be3fff16496477d2db0"} Feb 26 11:31:13 crc kubenswrapper[4724]: I0226 11:31:13.685070 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:13 crc kubenswrapper[4724]: I0226 11:31:13.685115 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:13 crc kubenswrapper[4724]: I0226 11:31:13.726591 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c6f668b64-t5tsj" podStartSLOduration=2.7265543709999998 podStartE2EDuration="2.726554371s" podCreationTimestamp="2026-02-26 11:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:13.725210496 +0000 UTC m=+1540.380949631" watchObservedRunningTime="2026-02-26 11:31:13.726554371 +0000 UTC m=+1540.382293486" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.214635 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.297923 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-fernet-keys\") pod \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.298053 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-config-data\") pod \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.298111 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-credential-keys\") pod \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.298139 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-scripts\") pod \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.298163 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-combined-ca-bundle\") pod \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.298226 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhbc\" (UniqueName: \"kubernetes.io/projected/fb3c003b-9f91-4c11-a530-3f39fe5072b3-kube-api-access-lmhbc\") pod \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\" (UID: \"fb3c003b-9f91-4c11-a530-3f39fe5072b3\") " Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.323938 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb3c003b-9f91-4c11-a530-3f39fe5072b3" (UID: "fb3c003b-9f91-4c11-a530-3f39fe5072b3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.339511 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-scripts" (OuterVolumeSpecName: "scripts") pod "fb3c003b-9f91-4c11-a530-3f39fe5072b3" (UID: "fb3c003b-9f91-4c11-a530-3f39fe5072b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.341634 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3c003b-9f91-4c11-a530-3f39fe5072b3-kube-api-access-lmhbc" (OuterVolumeSpecName: "kube-api-access-lmhbc") pod "fb3c003b-9f91-4c11-a530-3f39fe5072b3" (UID: "fb3c003b-9f91-4c11-a530-3f39fe5072b3"). InnerVolumeSpecName "kube-api-access-lmhbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.399589 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb3c003b-9f91-4c11-a530-3f39fe5072b3" (UID: "fb3c003b-9f91-4c11-a530-3f39fe5072b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.401459 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.401486 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.401499 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.401515 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhbc\" (UniqueName: \"kubernetes.io/projected/fb3c003b-9f91-4c11-a530-3f39fe5072b3-kube-api-access-lmhbc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.408649 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-config-data" (OuterVolumeSpecName: "config-data") pod "fb3c003b-9f91-4c11-a530-3f39fe5072b3" (UID: "fb3c003b-9f91-4c11-a530-3f39fe5072b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.504480 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.592627 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb3c003b-9f91-4c11-a530-3f39fe5072b3" (UID: "fb3c003b-9f91-4c11-a530-3f39fe5072b3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.606476 4724 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb3c003b-9f91-4c11-a530-3f39fe5072b3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.714399 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5xkt" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.725661 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xkt" event={"ID":"fb3c003b-9f91-4c11-a530-3f39fe5072b3","Type":"ContainerDied","Data":"af61a00068ab0bae7a533fa1f767af91a887d8afc18c3272ff760d26a922a6f6"} Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.725715 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af61a00068ab0bae7a533fa1f767af91a887d8afc18c3272ff760d26a922a6f6" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.944906 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-678bf4f784-7wp9n"] Feb 26 11:31:14 crc kubenswrapper[4724]: E0226 11:31:14.945358 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3c003b-9f91-4c11-a530-3f39fe5072b3" containerName="keystone-bootstrap" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.945377 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3c003b-9f91-4c11-a530-3f39fe5072b3" containerName="keystone-bootstrap" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.945621 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3c003b-9f91-4c11-a530-3f39fe5072b3" containerName="keystone-bootstrap" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.950677 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.954034 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.954058 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.954170 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.954269 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.958263 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.958618 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l4lrz" Feb 26 11:31:14 crc kubenswrapper[4724]: I0226 11:31:14.981045 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-678bf4f784-7wp9n"] Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019130 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwft9\" (UniqueName: \"kubernetes.io/projected/e21108d2-f9c8-4427-80c5-402ec0dbf689-kube-api-access-mwft9\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019225 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-scripts\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 
11:31:15.019268 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-combined-ca-bundle\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019320 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-config-data\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019359 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-credential-keys\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019385 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-fernet-keys\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019419 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-internal-tls-certs\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.019452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-public-tls-certs\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.121781 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwft9\" (UniqueName: \"kubernetes.io/projected/e21108d2-f9c8-4427-80c5-402ec0dbf689-kube-api-access-mwft9\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.121925 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-scripts\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.121949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-combined-ca-bundle\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 
11:31:15.122047 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-config-data\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.122116 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-fernet-keys\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.122140 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-credential-keys\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.122252 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-internal-tls-certs\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.122294 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-public-tls-certs\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.131897 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-credential-keys\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.133805 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-internal-tls-certs\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.139711 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-combined-ca-bundle\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.143592 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-scripts\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.146631 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-fernet-keys\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.153460 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-public-tls-certs\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.153971 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21108d2-f9c8-4427-80c5-402ec0dbf689-config-data\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.169926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwft9\" (UniqueName: \"kubernetes.io/projected/e21108d2-f9c8-4427-80c5-402ec0dbf689-kube-api-access-mwft9\") pod \"keystone-678bf4f784-7wp9n\" (UID: \"e21108d2-f9c8-4427-80c5-402ec0dbf689\") " pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:15 crc kubenswrapper[4724]: I0226 11:31:15.290121 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.217060 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-678bf4f784-7wp9n"] Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.229415 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.230489 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.299366 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.304762 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.312559 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.464483 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.754761 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-678bf4f784-7wp9n" event={"ID":"e21108d2-f9c8-4427-80c5-402ec0dbf689","Type":"ContainerStarted","Data":"72cdeb3f8129767d99a01daf3a5d2af8068c01f3a924982785893361afec36b8"} Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.755108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-678bf4f784-7wp9n" event={"ID":"e21108d2-f9c8-4427-80c5-402ec0dbf689","Type":"ContainerStarted","Data":"d4a73ee03749064724bcafbd19c616aa242e5b34abb2786bea8192b075fd2260"} Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.760223 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fllvh" 
event={"ID":"f6f963de-7cc1-40fa-93ce-5f1facd31ffc","Type":"ContainerStarted","Data":"7dc216225ecc5fac07af675a3ec7380426408abd80a1107e041dcb41e471d115"} Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.788245 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fllvh" podStartSLOduration=4.8603831379999995 podStartE2EDuration="1m30.788229499s" podCreationTimestamp="2026-02-26 11:29:46 +0000 UTC" firstStartedPulling="2026-02-26 11:29:48.646747747 +0000 UTC m=+1455.302486862" lastFinishedPulling="2026-02-26 11:31:14.574594108 +0000 UTC m=+1541.230333223" observedRunningTime="2026-02-26 11:31:16.785853109 +0000 UTC m=+1543.441592224" watchObservedRunningTime="2026-02-26 11:31:16.788229499 +0000 UTC m=+1543.443968614" Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.906518 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:31:16 crc kubenswrapper[4724]: I0226 11:31:16.906569 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:31:17 crc kubenswrapper[4724]: I0226 11:31:17.790014 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:17 crc kubenswrapper[4724]: I0226 11:31:17.840229 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-678bf4f784-7wp9n" podStartSLOduration=3.840211819 podStartE2EDuration="3.840211819s" podCreationTimestamp="2026-02-26 11:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:17.826833367 +0000 UTC m=+1544.482572482" watchObservedRunningTime="2026-02-26 11:31:17.840211819 +0000 UTC m=+1544.495950934" Feb 26 11:31:18 crc kubenswrapper[4724]: I0226 11:31:18.063697 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:31:18 crc kubenswrapper[4724]: I0226 11:31:18.368437 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:31:22 crc kubenswrapper[4724]: I0226 11:31:22.861311 4724 generic.go:334] "Generic (PLEG): container finished" podID="dedd4492-c73a-4f47-8243-fea2dd842a4f" containerID="30c67edacf3dbdd37a1504690171351de2cf9b8023717ca71b1d73366dc02fc8" exitCode=0 Feb 26 11:31:22 crc kubenswrapper[4724]: I0226 11:31:22.861868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rkvvl" 
event={"ID":"dedd4492-c73a-4f47-8243-fea2dd842a4f","Type":"ContainerDied","Data":"30c67edacf3dbdd37a1504690171351de2cf9b8023717ca71b1d73366dc02fc8"} Feb 26 11:31:27 crc kubenswrapper[4724]: I0226 11:31:27.920150 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rkvvl" event={"ID":"dedd4492-c73a-4f47-8243-fea2dd842a4f","Type":"ContainerDied","Data":"c0b67cf2e1d1f2caf3ec17cabd18c74a668de9c1987cc0997e04cd659c32404c"} Feb 26 11:31:27 crc kubenswrapper[4724]: I0226 11:31:27.920796 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b67cf2e1d1f2caf3ec17cabd18c74a668de9c1987cc0997e04cd659c32404c" Feb 26 11:31:27 crc kubenswrapper[4724]: I0226 11:31:27.945117 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rkvvl" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.071809 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.072004 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-db-sync-config-data\") pod \"dedd4492-c73a-4f47-8243-fea2dd842a4f\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.072044 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-combined-ca-bundle\") pod \"dedd4492-c73a-4f47-8243-fea2dd842a4f\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.072088 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp5v6\" (UniqueName: \"kubernetes.io/projected/dedd4492-c73a-4f47-8243-fea2dd842a4f-kube-api-access-kp5v6\") pod \"dedd4492-c73a-4f47-8243-fea2dd842a4f\" (UID: \"dedd4492-c73a-4f47-8243-fea2dd842a4f\") " Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.126474 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedd4492-c73a-4f47-8243-fea2dd842a4f-kube-api-access-kp5v6" (OuterVolumeSpecName: "kube-api-access-kp5v6") pod "dedd4492-c73a-4f47-8243-fea2dd842a4f" (UID: "dedd4492-c73a-4f47-8243-fea2dd842a4f"). InnerVolumeSpecName "kube-api-access-kp5v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.129080 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dedd4492-c73a-4f47-8243-fea2dd842a4f" (UID: "dedd4492-c73a-4f47-8243-fea2dd842a4f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.131424 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dedd4492-c73a-4f47-8243-fea2dd842a4f" (UID: "dedd4492-c73a-4f47-8243-fea2dd842a4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.226691 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.226730 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedd4492-c73a-4f47-8243-fea2dd842a4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.226742 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp5v6\" (UniqueName: \"kubernetes.io/projected/dedd4492-c73a-4f47-8243-fea2dd842a4f-kube-api-access-kp5v6\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:28 crc kubenswrapper[4724]: E0226 11:31:28.301752 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.366665 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.366747 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.367624 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3e5edab1e2c718511750fd9327e7561944102843f5433d3bb1fb9259ca86717b"} pod="openstack/horizon-57977849d4-8s5ds" containerMessage="Container horizon failed startup probe, will be restarted" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.367670 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" containerID="cri-o://3e5edab1e2c718511750fd9327e7561944102843f5433d3bb1fb9259ca86717b" gracePeriod=30 Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.935529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerStarted","Data":"23513c8e879138282aa73b1330f12595c015b660a0e503a56f757ff00afb81e4"} Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.936343 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.935764 4724 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="proxy-httpd" containerID="cri-o://23513c8e879138282aa73b1330f12595c015b660a0e503a56f757ff00afb81e4" gracePeriod=30 Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.935585 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rkvvl" Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.935812 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="ceilometer-notification-agent" containerID="cri-o://84dc0f4c8b5a9cd74987b297b4f5ce4fc77e2059e78fa476a7ab9677dff56f72" gracePeriod=30 Feb 26 11:31:28 crc kubenswrapper[4724]: I0226 11:31:28.935790 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="sg-core" containerID="cri-o://f023c34aa3b4f134fcc9d2bdbe799d8e74dd51f093b19e5f802bd9dae4867a1a" gracePeriod=30 Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.407207 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-59bb6b4c7b-c52zs"] Feb 26 11:31:29 crc kubenswrapper[4724]: E0226 11:31:29.407562 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedd4492-c73a-4f47-8243-fea2dd842a4f" containerName="barbican-db-sync" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.407583 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedd4492-c73a-4f47-8243-fea2dd842a4f" containerName="barbican-db-sync" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.407777 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedd4492-c73a-4f47-8243-fea2dd842a4f" containerName="barbican-db-sync" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.408711 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.418624 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-msvw2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.418864 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.419020 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.434727 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84bb945b69-xfww2"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.437372 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.451168 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84bb945b69-xfww2"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.457213 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.472044 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59bb6b4c7b-c52zs"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551222 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-config-data\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551284 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-config-data-custom\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551307 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-combined-ca-bundle\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551551 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-config-data\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-combined-ca-bundle\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551661 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frx8f\" (UniqueName: \"kubernetes.io/projected/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-kube-api-access-frx8f\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qcp\" (UniqueName: \"kubernetes.io/projected/04c98d03-1308-4014-8703-2c58516595ca-kube-api-access-d7qcp\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " 
pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c98d03-1308-4014-8703-2c58516595ca-logs\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551938 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-logs\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.551965 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-config-data-custom\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.584171 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-8k5rb"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.585772 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.604018 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-8k5rb"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654531 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qcp\" (UniqueName: \"kubernetes.io/projected/04c98d03-1308-4014-8703-2c58516595ca-kube-api-access-d7qcp\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654605 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c98d03-1308-4014-8703-2c58516595ca-logs\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-logs\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654672 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-config-data-custom\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654717 4724 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-config-data\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654750 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-config-data-custom\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654770 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-combined-ca-bundle\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654792 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-config-data\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654815 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-combined-ca-bundle\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.654834 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frx8f\" (UniqueName: \"kubernetes.io/projected/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-kube-api-access-frx8f\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.655762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c98d03-1308-4014-8703-2c58516595ca-logs\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.658355 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-logs\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.676149 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-config-data\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.677916 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-combined-ca-bundle\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.678287 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-config-data-custom\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.679677 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-config-data-custom\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.684876 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c98d03-1308-4014-8703-2c58516595ca-combined-ca-bundle\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.685828 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-config-data\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.708112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qcp\" (UniqueName: \"kubernetes.io/projected/04c98d03-1308-4014-8703-2c58516595ca-kube-api-access-d7qcp\") pod \"barbican-worker-84bb945b69-xfww2\" (UID: \"04c98d03-1308-4014-8703-2c58516595ca\") " pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.731255 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f5dc64bf8-kjdl8"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.733146 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.735105 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frx8f\" (UniqueName: \"kubernetes.io/projected/f4f8bc69-bc44-4cda-8799-9b3e0786ef81-kube-api-access-frx8f\") pod \"barbican-keystone-listener-59bb6b4c7b-c52zs\" (UID: \"f4f8bc69-bc44-4cda-8799-9b3e0786ef81\") " pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.736569 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.756153 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f5dc64bf8-kjdl8"] Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.758321 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.764413 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.764537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgkb5\" (UniqueName: \"kubernetes.io/projected/5f6c848a-a642-4b86-bfae-e715d8380602-kube-api-access-tgkb5\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.764586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.764626 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-config\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.764765 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.769687 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84bb945b69-xfww2" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867508 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867596 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data-custom\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867630 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867700 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a360323-97b1-46ae-9379-f340e76bf065-logs\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867733 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867777 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867820 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwf59\" (UniqueName: \"kubernetes.io/projected/9a360323-97b1-46ae-9379-f340e76bf065-kube-api-access-xwf59\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867880 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgkb5\" (UniqueName: \"kubernetes.io/projected/5f6c848a-a642-4b86-bfae-e715d8380602-kube-api-access-tgkb5\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867914 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-combined-ca-bundle\") pod 
\"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867941 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.867971 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-config\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.868665 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.868710 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-config\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.869349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.870031 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.874774 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.892282 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgkb5\" (UniqueName: \"kubernetes.io/projected/5f6c848a-a642-4b86-bfae-e715d8380602-kube-api-access-tgkb5\") pod \"dnsmasq-dns-6d66f584d7-8k5rb\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.900158 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.969242 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data-custom\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.970054 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a360323-97b1-46ae-9379-f340e76bf065-logs\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.970441 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a360323-97b1-46ae-9379-f340e76bf065-logs\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.970540 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.970587 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwf59\" (UniqueName: \"kubernetes.io/projected/9a360323-97b1-46ae-9379-f340e76bf065-kube-api-access-xwf59\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.970629 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-combined-ca-bundle\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.973434 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data-custom\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.991114 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-combined-ca-bundle\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:29 crc kubenswrapper[4724]: I0226 11:31:29.992928 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:30 crc 
kubenswrapper[4724]: I0226 11:31:30.013521 4724 generic.go:334] "Generic (PLEG): container finished" podID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerID="23513c8e879138282aa73b1330f12595c015b660a0e503a56f757ff00afb81e4" exitCode=0 Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.013554 4724 generic.go:334] "Generic (PLEG): container finished" podID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerID="f023c34aa3b4f134fcc9d2bdbe799d8e74dd51f093b19e5f802bd9dae4867a1a" exitCode=2 Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.013574 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerDied","Data":"23513c8e879138282aa73b1330f12595c015b660a0e503a56f757ff00afb81e4"} Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.013596 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerDied","Data":"f023c34aa3b4f134fcc9d2bdbe799d8e74dd51f093b19e5f802bd9dae4867a1a"} Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.020767 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwf59\" (UniqueName: \"kubernetes.io/projected/9a360323-97b1-46ae-9379-f340e76bf065-kube-api-access-xwf59\") pod \"barbican-api-5f5dc64bf8-kjdl8\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.025575 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.109215 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:30 crc kubenswrapper[4724]: I0226 11:31:30.450770 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84bb945b69-xfww2"] Feb 26 11:31:30 crc kubenswrapper[4724]: W0226 11:31:30.454297 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04c98d03_1308_4014_8703_2c58516595ca.slice/crio-42f44b0e137992d262e58a9fe50f50f8d10a3d7c3215e500c99704d437169671 WatchSource:0}: Error finding container 42f44b0e137992d262e58a9fe50f50f8d10a3d7c3215e500c99704d437169671: Status 404 returned error can't find the container with id 42f44b0e137992d262e58a9fe50f50f8d10a3d7c3215e500c99704d437169671 Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.017514 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-8k5rb"] Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.037475 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84bb945b69-xfww2" event={"ID":"04c98d03-1308-4014-8703-2c58516595ca","Type":"ContainerStarted","Data":"42f44b0e137992d262e58a9fe50f50f8d10a3d7c3215e500c99704d437169671"} Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.059941 4724 generic.go:334] "Generic (PLEG): container finished" podID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerID="84dc0f4c8b5a9cd74987b297b4f5ce4fc77e2059e78fa476a7ab9677dff56f72" exitCode=0 Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.059991 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerDied","Data":"84dc0f4c8b5a9cd74987b297b4f5ce4fc77e2059e78fa476a7ab9677dff56f72"} Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.145255 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59bb6b4c7b-c52zs"] Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.259440 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f5dc64bf8-kjdl8"] Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.280934 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393140 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-config-data\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393251 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-sg-core-conf-yaml\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393392 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl9tk\" (UniqueName: \"kubernetes.io/projected/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-kube-api-access-pl9tk\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393414 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-scripts\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393461 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-run-httpd\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393489 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-log-httpd\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.393669 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-combined-ca-bundle\") pod \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\" (UID: \"9ce9a592-28c1-40fc-b4e5-90523b59c6d5\") " Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.397496 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.397689 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.406428 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-kube-api-access-pl9tk" (OuterVolumeSpecName: "kube-api-access-pl9tk") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "kube-api-access-pl9tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.411379 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-scripts" (OuterVolumeSpecName: "scripts") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.464308 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.496900 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.504091 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.504127 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl9tk\" (UniqueName: \"kubernetes.io/projected/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-kube-api-access-pl9tk\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.504139 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.504147 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.504158 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.504166 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.533514 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-config-data" (OuterVolumeSpecName: "config-data") pod "9ce9a592-28c1-40fc-b4e5-90523b59c6d5" (UID: "9ce9a592-28c1-40fc-b4e5-90523b59c6d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:31 crc kubenswrapper[4724]: I0226 11:31:31.612227 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce9a592-28c1-40fc-b4e5-90523b59c6d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.080673 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" event={"ID":"9a360323-97b1-46ae-9379-f340e76bf065","Type":"ContainerStarted","Data":"5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.081031 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.081047 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" event={"ID":"9a360323-97b1-46ae-9379-f340e76bf065","Type":"ContainerStarted","Data":"b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.081061 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.081087 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" event={"ID":"9a360323-97b1-46ae-9379-f340e76bf065","Type":"ContainerStarted","Data":"c68e4c9e8103eee91de8a34a550681ffb1526a7be5b751a18ff6c82cee678586"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.095647 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.095644 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ce9a592-28c1-40fc-b4e5-90523b59c6d5","Type":"ContainerDied","Data":"5a7f99e884f3917960046e92266601052e01ba600fa9cf01245b4d4d1ffc3e14"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.095882 4724 scope.go:117] "RemoveContainer" containerID="23513c8e879138282aa73b1330f12595c015b660a0e503a56f757ff00afb81e4" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.103609 4724 generic.go:334] "Generic (PLEG): container finished" podID="5f6c848a-a642-4b86-bfae-e715d8380602" containerID="895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec" exitCode=0 Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.103755 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" event={"ID":"5f6c848a-a642-4b86-bfae-e715d8380602","Type":"ContainerDied","Data":"895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.103798 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" event={"ID":"5f6c848a-a642-4b86-bfae-e715d8380602","Type":"ContainerStarted","Data":"78739ee55bddcdccdcb37728888cce2c025e806a2ab82dee7da1aa579591f868"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.107582 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" event={"ID":"f4f8bc69-bc44-4cda-8799-9b3e0786ef81","Type":"ContainerStarted","Data":"14ab55ebdf2a7cad336b28cfdf75ed3085e0943abdc29433acd045716ab755f1"} Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.122967 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podStartSLOduration=3.122946496 podStartE2EDuration="3.122946496s" podCreationTimestamp="2026-02-26 11:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:32.120425072 +0000 UTC m=+1558.776164207" watchObservedRunningTime="2026-02-26 11:31:32.122946496 +0000 UTC m=+1558.778685621" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.219699 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.242302 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.248506 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:32 crc kubenswrapper[4724]: E0226 11:31:32.248986 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="sg-core" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.249009 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="sg-core" Feb 26 11:31:32 crc kubenswrapper[4724]: E0226 11:31:32.249027 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="ceilometer-notification-agent" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.249036 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" 
containerName="ceilometer-notification-agent" Feb 26 11:31:32 crc kubenswrapper[4724]: E0226 11:31:32.249076 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="proxy-httpd" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.249087 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="proxy-httpd" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.249327 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="sg-core" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.249350 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="proxy-httpd" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.249363 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" containerName="ceilometer-notification-agent" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.251389 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.254050 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.254303 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.323865 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444430 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7zw\" (UniqueName: \"kubernetes.io/projected/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-kube-api-access-7f7zw\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444587 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444658 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-scripts\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444687 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-run-httpd\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0" Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444724 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-log-httpd\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0" 
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.444803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-config-data\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.546716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.546807 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-scripts\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.546842 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-run-httpd\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.546878 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-log-httpd\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.546905 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.546962 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-config-data\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.547001 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f7zw\" (UniqueName: \"kubernetes.io/projected/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-kube-api-access-7f7zw\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.547944 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-log-httpd\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.548291 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-run-httpd\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.552903 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.559219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-config-data\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.571005 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-scripts\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.577364 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f7zw\" (UniqueName: \"kubernetes.io/projected/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-kube-api-access-7f7zw\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.579836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " pod="openstack/ceilometer-0"
Feb 26 11:31:32 crc kubenswrapper[4724]: I0226 11:31:32.582838 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.482129 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5466fc4f46-xdj8r"]
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.484247 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.487203 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.487921 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.534799 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5466fc4f46-xdj8r"]
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.669387 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-combined-ca-bundle\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.669729 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqpc\" (UniqueName: \"kubernetes.io/projected/f9707878-82b6-46d7-b6c6-65745f7c72c3-kube-api-access-jmqpc\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.669855 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-config-data\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.669919 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-public-tls-certs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.670096 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9707878-82b6-46d7-b6c6-65745f7c72c3-logs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.670145 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-config-data-custom\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.670193 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-internal-tls-certs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772116 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-public-tls-certs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772206 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9707878-82b6-46d7-b6c6-65745f7c72c3-logs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772235 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-config-data-custom\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772254 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-internal-tls-certs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772305 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-combined-ca-bundle\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772367 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqpc\" (UniqueName: \"kubernetes.io/projected/f9707878-82b6-46d7-b6c6-65745f7c72c3-kube-api-access-jmqpc\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.772399 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-config-data\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.774022 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9707878-82b6-46d7-b6c6-65745f7c72c3-logs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.777412 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-combined-ca-bundle\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.791392 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-config-data-custom\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.792194 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-public-tls-certs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.792911 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-internal-tls-certs\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.795537 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9707878-82b6-46d7-b6c6-65745f7c72c3-config-data\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.821874 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqpc\" (UniqueName: \"kubernetes.io/projected/f9707878-82b6-46d7-b6c6-65745f7c72c3-kube-api-access-jmqpc\") pod \"barbican-api-5466fc4f46-xdj8r\" (UID: \"f9707878-82b6-46d7-b6c6-65745f7c72c3\") " pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:33 crc kubenswrapper[4724]: I0226 11:31:33.998395 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce9a592-28c1-40fc-b4e5-90523b59c6d5" path="/var/lib/kubelet/pods/9ce9a592-28c1-40fc-b4e5-90523b59c6d5/volumes"
Feb 26 11:31:34 crc kubenswrapper[4724]: I0226 11:31:34.113464 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:34 crc kubenswrapper[4724]: I0226 11:31:34.124468 4724 scope.go:117] "RemoveContainer" containerID="f023c34aa3b4f134fcc9d2bdbe799d8e74dd51f093b19e5f802bd9dae4867a1a"
Feb 26 11:31:34 crc kubenswrapper[4724]: I0226 11:31:34.166384 4724 scope.go:117] "RemoveContainer" containerID="84dc0f4c8b5a9cd74987b297b4f5ce4fc77e2059e78fa476a7ab9677dff56f72"
Feb 26 11:31:34 crc kubenswrapper[4724]: W0226 11:31:34.612266 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb96e9c06_0ce9_46b6_9422_a0729d93d8d6.slice/crio-36c5f8179e5443d7c2914cb830c0bd210cc2b11d15076ace10ac7852e8add760 WatchSource:0}: Error finding container 36c5f8179e5443d7c2914cb830c0bd210cc2b11d15076ace10ac7852e8add760: Status 404 returned error can't find the container with id 36c5f8179e5443d7c2914cb830c0bd210cc2b11d15076ace10ac7852e8add760
Feb 26 11:31:34 crc kubenswrapper[4724]: I0226 11:31:34.636282 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.038512 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5466fc4f46-xdj8r"]
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.190340 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" event={"ID":"f4f8bc69-bc44-4cda-8799-9b3e0786ef81","Type":"ContainerStarted","Data":"9a427f14e10e31bf1e3a96fe29bd3e3596ea68a97cab6245adeced7c8ba3867c"}
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.213205 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" event={"ID":"5f6c848a-a642-4b86-bfae-e715d8380602","Type":"ContainerStarted","Data":"d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772"}
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.213439 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb"
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.221796 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerStarted","Data":"36c5f8179e5443d7c2914cb830c0bd210cc2b11d15076ace10ac7852e8add760"}
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.225021 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84bb945b69-xfww2" event={"ID":"04c98d03-1308-4014-8703-2c58516595ca","Type":"ContainerStarted","Data":"984545626a880f048d9576c62d4501b8f1404e930c26c03ceb6e73c9031ee002"}
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.227475 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5466fc4f46-xdj8r" event={"ID":"f9707878-82b6-46d7-b6c6-65745f7c72c3","Type":"ContainerStarted","Data":"18af134e8347477541162796856ae2242d3a3d81b66827625f2bc24e7952b34a"}
Feb 26 11:31:35 crc kubenswrapper[4724]: I0226 11:31:35.235719 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" podStartSLOduration=6.23569978 podStartE2EDuration="6.23569978s" podCreationTimestamp="2026-02-26 11:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:35.232846567 +0000 UTC m=+1561.888585712" watchObservedRunningTime="2026-02-26 11:31:35.23569978 +0000 UTC m=+1561.891438895"
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.275580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5466fc4f46-xdj8r" event={"ID":"f9707878-82b6-46d7-b6c6-65745f7c72c3","Type":"ContainerStarted","Data":"25d30f7a7b066f85b359c032399de194f6c817d7308b15172cefe9d616faacdb"}
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.276091 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5466fc4f46-xdj8r" event={"ID":"f9707878-82b6-46d7-b6c6-65745f7c72c3","Type":"ContainerStarted","Data":"442ed65a014f84e4f4cf6f1720dc7321ff39ea810e3edac139a5c7f389a3b736"}
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.277458 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.277496 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5466fc4f46-xdj8r"
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.288742 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" event={"ID":"f4f8bc69-bc44-4cda-8799-9b3e0786ef81","Type":"ContainerStarted","Data":"8fcfb1894fba2198869594b02a724fbf70f16ddd2a354ec795d2977b635d2faf"}
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.301911 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerStarted","Data":"a85bc78dafa79d8f07279a4aa337f47c7589205dbd700e113817acf807b1a9bb"}
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.324432 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84bb945b69-xfww2" event={"ID":"04c98d03-1308-4014-8703-2c58516595ca","Type":"ContainerStarted","Data":"bd70d6550cbf6028943fab916f8a2b2b7997a11260c1e0dd5f72071745db3717"}
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.368863 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84bb945b69-xfww2" podStartSLOduration=3.664391499 podStartE2EDuration="7.368841942s" podCreationTimestamp="2026-02-26 11:31:29 +0000 UTC" firstStartedPulling="2026-02-26 11:31:30.46135382 +0000 UTC m=+1557.117092935" lastFinishedPulling="2026-02-26 11:31:34.165804273 +0000 UTC m=+1560.821543378" observedRunningTime="2026-02-26 11:31:36.346769968 +0000 UTC m=+1563.002509083" watchObservedRunningTime="2026-02-26 11:31:36.368841942 +0000 UTC m=+1563.024581057"
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.372466 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5466fc4f46-xdj8r" podStartSLOduration=3.372453364 podStartE2EDuration="3.372453364s" podCreationTimestamp="2026-02-26 11:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:36.324762636 +0000 UTC m=+1562.980501751" watchObservedRunningTime="2026-02-26 11:31:36.372453364 +0000 UTC m=+1563.028192509"
Feb 26 11:31:36 crc kubenswrapper[4724]: I0226 11:31:36.394228 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-59bb6b4c7b-c52zs" podStartSLOduration=4.413741756 podStartE2EDuration="7.394167939s" podCreationTimestamp="2026-02-26 11:31:29 +0000 UTC" firstStartedPulling="2026-02-26 11:31:31.185977065 +0000 UTC m=+1557.841716180" lastFinishedPulling="2026-02-26 11:31:34.166403248 +0000 UTC m=+1560.822142363" observedRunningTime="2026-02-26 11:31:36.39226532 +0000 UTC m=+1563.048004445" watchObservedRunningTime="2026-02-26 11:31:36.394167939 +0000 UTC m=+1563.049907064"
Feb 26 11:31:37 crc kubenswrapper[4724]: I0226 11:31:37.365020 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerStarted","Data":"80262f68c3a17cdab8a02f47df7f79ab1a05f36ef2ad0ca829ec203bd02216e4"}
Feb 26 11:31:38 crc kubenswrapper[4724]: I0226 11:31:38.061911 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused"
Feb 26 11:31:38 crc kubenswrapper[4724]: I0226 11:31:38.062036 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ddfb9fd96-hzc8c"
Feb 26 11:31:38 crc kubenswrapper[4724]: I0226 11:31:38.062789 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"cf119be6b682f8400345d567636d81c24d1362c00c424d4a82811c66edd703a0"} pod="openstack/horizon-ddfb9fd96-hzc8c" containerMessage="Container horizon failed startup probe, will be restarted"
Feb 26 11:31:38 crc kubenswrapper[4724]: I0226 11:31:38.062830 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" containerID="cri-o://cf119be6b682f8400345d567636d81c24d1362c00c424d4a82811c66edd703a0" gracePeriod=30
Feb 26 11:31:38 crc kubenswrapper[4724]: I0226 11:31:38.378963 4724 generic.go:334] "Generic (PLEG): container finished" podID="65202f21-3756-4083-b158-9f06dca33deb" containerID="1c6c39dc2d7757dbed1a2892e1c42c6122582363b7b13fb7765bb627d4ad724b" exitCode=0
Feb 26 11:31:38 crc kubenswrapper[4724]: I0226 11:31:38.379072 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jrqgs" event={"ID":"65202f21-3756-4083-b158-9f06dca33deb","Type":"ContainerDied","Data":"1c6c39dc2d7757dbed1a2892e1c42c6122582363b7b13fb7765bb627d4ad724b"}
Feb 26 11:31:39 crc kubenswrapper[4724]: I0226 11:31:39.393737 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerStarted","Data":"b1755022a67635c13cc93d63ca7f3ebc54ada71b41627fd77443fbfb898c0b3f"}
Feb 26 11:31:39 crc kubenswrapper[4724]: I0226 11:31:39.859720 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jrqgs"
Feb 26 11:31:39 crc kubenswrapper[4724]: I0226 11:31:39.939617 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-config-data\") pod \"65202f21-3756-4083-b158-9f06dca33deb\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") "
Feb 26 11:31:39 crc kubenswrapper[4724]: I0226 11:31:39.939687 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-combined-ca-bundle\") pod \"65202f21-3756-4083-b158-9f06dca33deb\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") "
Feb 26 11:31:39 crc kubenswrapper[4724]: I0226 11:31:39.939777 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k5tp\" (UniqueName: \"kubernetes.io/projected/65202f21-3756-4083-b158-9f06dca33deb-kube-api-access-5k5tp\") pod \"65202f21-3756-4083-b158-9f06dca33deb\" (UID: \"65202f21-3756-4083-b158-9f06dca33deb\") "
Feb 26 11:31:39 crc kubenswrapper[4724]: I0226 11:31:39.966754 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65202f21-3756-4083-b158-9f06dca33deb-kube-api-access-5k5tp" (OuterVolumeSpecName: "kube-api-access-5k5tp") pod "65202f21-3756-4083-b158-9f06dca33deb" (UID: "65202f21-3756-4083-b158-9f06dca33deb"). InnerVolumeSpecName "kube-api-access-5k5tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.023358 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65202f21-3756-4083-b158-9f06dca33deb" (UID: "65202f21-3756-4083-b158-9f06dca33deb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.042371 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.042409 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k5tp\" (UniqueName: \"kubernetes.io/projected/65202f21-3756-4083-b158-9f06dca33deb-kube-api-access-5k5tp\") on node \"crc\" DevicePath \"\""
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.125654 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-config-data" (OuterVolumeSpecName: "config-data") pod "65202f21-3756-4083-b158-9f06dca33deb" (UID: "65202f21-3756-4083-b158-9f06dca33deb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.147228 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65202f21-3756-4083-b158-9f06dca33deb-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.402608 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jrqgs" event={"ID":"65202f21-3756-4083-b158-9f06dca33deb","Type":"ContainerDied","Data":"5e233ac070bc7d12c710cb67e63e229665cec758442e2c5155015ac6972eb021"}
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.402907 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e233ac070bc7d12c710cb67e63e229665cec758442e2c5155015ac6972eb021"
Feb 26 11:31:40 crc kubenswrapper[4724]: I0226 11:31:40.402669 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jrqgs"
Feb 26 11:31:41 crc kubenswrapper[4724]: I0226 11:31:41.191502 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 11:31:41 crc kubenswrapper[4724]: I0226 11:31:41.191495 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 11:31:41 crc kubenswrapper[4724]: I0226 11:31:41.426759 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerStarted","Data":"5d496e314b4816b519be635885b65799f0c0f04a9d3e5ade9fed904a33bfe612"}
Feb 26 11:31:41 crc kubenswrapper[4724]: I0226 11:31:41.428034 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 26 11:31:43 crc kubenswrapper[4724]: I0226 11:31:43.442719 4724 generic.go:334] "Generic (PLEG): container finished" podID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" containerID="7dc216225ecc5fac07af675a3ec7380426408abd80a1107e041dcb41e471d115" exitCode=0
Feb 26 11:31:43 crc kubenswrapper[4724]: I0226 11:31:43.442772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fllvh" event={"ID":"f6f963de-7cc1-40fa-93ce-5f1facd31ffc","Type":"ContainerDied","Data":"7dc216225ecc5fac07af675a3ec7380426408abd80a1107e041dcb41e471d115"}
Feb 26 11:31:43 crc kubenswrapper[4724]: I0226 11:31:43.445729 4724 generic.go:334] "Generic (PLEG): container finished" podID="ba5fb0ea-707e-4123-8510-b1d1f9976c34" containerID="6531ce102a318f4e1d9c9d45ec01a52344227633d6e92c79d338f39d229919e8" exitCode=0
Feb 26 11:31:43 crc kubenswrapper[4724]: I0226 11:31:43.445790 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b6cqc" event={"ID":"ba5fb0ea-707e-4123-8510-b1d1f9976c34","Type":"ContainerDied","Data":"6531ce102a318f4e1d9c9d45ec01a52344227633d6e92c79d338f39d229919e8"}
Feb 26 11:31:43 crc kubenswrapper[4724]: I0226 11:31:43.466789 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.267760691 podStartE2EDuration="11.466769351s" podCreationTimestamp="2026-02-26 11:31:32 +0000 UTC" firstStartedPulling="2026-02-26 11:31:34.624669007 +0000 UTC m=+1561.280408122" lastFinishedPulling="2026-02-26 11:31:40.823677667 +0000 UTC m=+1567.479416782" observedRunningTime="2026-02-26 11:31:41.484374189 +0000 UTC m=+1568.140113304" watchObservedRunningTime="2026-02-26 11:31:43.466769351 +0000 UTC m=+1570.122508466"
Feb 26 11:31:44 crc kubenswrapper[4724]: I0226 11:31:44.538498 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f5dc64bf8-kjdl8"
Feb 26 11:31:44 crc kubenswrapper[4724]: I0226 11:31:44.545862 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c6f668b64-t5tsj"
Feb 26 11:31:44 crc kubenswrapper[4724]: I0226 11:31:44.908862 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb"
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.021965 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-rh42r"]
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.022207 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="dnsmasq-dns" containerID="cri-o://ba1a001785853808c0463aa52a382e160b06f195a0731fa7366a9be330f43189" gracePeriod=10
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.057558 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b6cqc"
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.082803 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f5dc64bf8-kjdl8"
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.146388 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmgks\" (UniqueName: \"kubernetes.io/projected/ba5fb0ea-707e-4123-8510-b1d1f9976c34-kube-api-access-cmgks\") pod \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") "
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.164526 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-config\") pod \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") "
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.164805 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-combined-ca-bundle\") pod \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\" (UID: \"ba5fb0ea-707e-4123-8510-b1d1f9976c34\") "
Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.165554 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5fb0ea-707e-4123-8510-b1d1f9976c34-kube-api-access-cmgks" (OuterVolumeSpecName: "kube-api-access-cmgks") pod "ba5fb0ea-707e-4123-8510-b1d1f9976c34" (UID: "ba5fb0ea-707e-4123-8510-b1d1f9976c34"). InnerVolumeSpecName "kube-api-access-cmgks".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.171417 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.171786 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.181410 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmgks\" (UniqueName: \"kubernetes.io/projected/ba5fb0ea-707e-4123-8510-b1d1f9976c34-kube-api-access-cmgks\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.294384 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-config" (OuterVolumeSpecName: "config") pod "ba5fb0ea-707e-4123-8510-b1d1f9976c34" (UID: "ba5fb0ea-707e-4123-8510-b1d1f9976c34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.300429 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba5fb0ea-707e-4123-8510-b1d1f9976c34" (UID: "ba5fb0ea-707e-4123-8510-b1d1f9976c34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.330607 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fllvh" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.386312 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.386337 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb0ea-707e-4123-8510-b1d1f9976c34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.395821 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.491922 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-scripts\") pod \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.492049 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7m4n\" (UniqueName: \"kubernetes.io/projected/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-kube-api-access-b7m4n\") pod \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.492154 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-etc-machine-id\") pod \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.492212 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-db-sync-config-data\") pod \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.492243 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-combined-ca-bundle\") pod \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.492279 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-config-data\") pod \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\" (UID: \"f6f963de-7cc1-40fa-93ce-5f1facd31ffc\") " Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.494291 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f6f963de-7cc1-40fa-93ce-5f1facd31ffc" (UID: "f6f963de-7cc1-40fa-93ce-5f1facd31ffc"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.499758 4724 generic.go:334] "Generic (PLEG): container finished" podID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerID="ba1a001785853808c0463aa52a382e160b06f195a0731fa7366a9be330f43189" exitCode=0 Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.499815 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" event={"ID":"ae2d0ec9-77c9-4a19-b783-b40613d55eb5","Type":"ContainerDied","Data":"ba1a001785853808c0463aa52a382e160b06f195a0731fa7366a9be330f43189"} Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.512842 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-scripts" (OuterVolumeSpecName: "scripts") pod "f6f963de-7cc1-40fa-93ce-5f1facd31ffc" (UID: "f6f963de-7cc1-40fa-93ce-5f1facd31ffc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.514037 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b6cqc" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.514150 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b6cqc" event={"ID":"ba5fb0ea-707e-4123-8510-b1d1f9976c34","Type":"ContainerDied","Data":"6c9349dd70f1f0f3e6cd43f9385ce0f7c9b250d38f62de9c7299531abe71a18a"} Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.514267 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c9349dd70f1f0f3e6cd43f9385ce0f7c9b250d38f62de9c7299531abe71a18a" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.514719 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f6f963de-7cc1-40fa-93ce-5f1facd31ffc" (UID: "f6f963de-7cc1-40fa-93ce-5f1facd31ffc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.536334 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-kube-api-access-b7m4n" (OuterVolumeSpecName: "kube-api-access-b7m4n") pod "f6f963de-7cc1-40fa-93ce-5f1facd31ffc" (UID: "f6f963de-7cc1-40fa-93ce-5f1facd31ffc"). InnerVolumeSpecName "kube-api-access-b7m4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.551840 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.566119 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fllvh" event={"ID":"f6f963de-7cc1-40fa-93ce-5f1facd31ffc","Type":"ContainerDied","Data":"96e2d0c3ee3903b58d23ef780d3bcfc64a3837d4d08e07dda1c8686c2721e1e9"} Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.566165 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e2d0c3ee3903b58d23ef780d3bcfc64a3837d4d08e07dda1c8686c2721e1e9" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.573147 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fllvh" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.601259 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7m4n\" (UniqueName: \"kubernetes.io/projected/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-kube-api-access-b7m4n\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.601278 4724 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.601293 4724 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.601301 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.627526 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6f963de-7cc1-40fa-93ce-5f1facd31ffc" (UID: "f6f963de-7cc1-40fa-93ce-5f1facd31ffc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.700357 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-config-data" (OuterVolumeSpecName: "config-data") pod "f6f963de-7cc1-40fa-93ce-5f1facd31ffc" (UID: "f6f963de-7cc1-40fa-93ce-5f1facd31ffc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.703874 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:45 crc kubenswrapper[4724]: I0226 11:31:45.703917 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f963de-7cc1-40fa-93ce-5f1facd31ffc-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.001420 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-wtcch"] Feb 26 11:31:46 crc kubenswrapper[4724]: E0226 11:31:46.006634 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fb0ea-707e-4123-8510-b1d1f9976c34" containerName="neutron-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.006857 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fb0ea-707e-4123-8510-b1d1f9976c34" containerName="neutron-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: E0226 11:31:46.006931 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65202f21-3756-4083-b158-9f06dca33deb" containerName="heat-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.006983 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="65202f21-3756-4083-b158-9f06dca33deb" containerName="heat-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: E0226 11:31:46.007050 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" containerName="cinder-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.010055 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" containerName="cinder-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.010508 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="65202f21-3756-4083-b158-9f06dca33deb" containerName="heat-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.010591 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5fb0ea-707e-4123-8510-b1d1f9976c34" containerName="neutron-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.010671 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" containerName="cinder-db-sync" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.011651 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.034292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.034370 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.034494 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.034524 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-svc\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.034577 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-config\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.034614 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmvj\" (UniqueName: \"kubernetes.io/projected/edd3f8dc-e935-462e-ae52-c136ad4fddc2-kube-api-access-fmmvj\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.086881 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-wtcch"] Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.136573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.136644 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.136765 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.136792 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-svc\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.136832 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-config\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.136867 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmmvj\" (UniqueName: \"kubernetes.io/projected/edd3f8dc-e935-462e-ae52-c136ad4fddc2-kube-api-access-fmmvj\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.137933 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.138000 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-svc\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.147826 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.149520 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.149841 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-config\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.161733 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59b8f9f788-85hsf"] Feb 26 11:31:46 crc kubenswrapper[4724]: 
I0226 11:31:46.163592 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.232317 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59b8f9f788-85hsf"] Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.243099 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ppnzv" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.243358 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.243579 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.244861 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-httpd-config\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.244956 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-combined-ca-bundle\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.245003 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6md9m\" (UniqueName: \"kubernetes.io/projected/cd5f2a9d-eba9-4157-9b34-fba1714fa562-kube-api-access-6md9m\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.245038 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-ovndb-tls-certs\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.245070 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-config\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.250149 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.270355 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-869f945844-vjsk6"] Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.274154 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.274753 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.288070 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmmvj\" (UniqueName: \"kubernetes.io/projected/edd3f8dc-e935-462e-ae52-c136ad4fddc2-kube-api-access-fmmvj\") pod \"dnsmasq-dns-688c87cc99-wtcch\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.348882 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbzqn\" (UniqueName: \"kubernetes.io/projected/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-kube-api-access-qbzqn\") pod \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.348940 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-config\") pod \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.348972 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-sb\") pod \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349031 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-svc\") pod \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349053 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-swift-storage-0\") pod \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349087 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-nb\") pod \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\" (UID: \"ae2d0ec9-77c9-4a19-b783-b40613d55eb5\") " Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349236 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-scripts\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349269 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8896d359-383e-4f56-a18d-2d8a913d05a4-logs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-httpd-config\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349319 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2xs\" (UniqueName: \"kubernetes.io/projected/8896d359-383e-4f56-a18d-2d8a913d05a4-kube-api-access-2x2xs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349339 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-internal-tls-certs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349386 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-public-tls-certs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349410 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-config-data\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349432 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-combined-ca-bundle\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349448 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-combined-ca-bundle\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6md9m\" (UniqueName: \"kubernetes.io/projected/cd5f2a9d-eba9-4157-9b34-fba1714fa562-kube-api-access-6md9m\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349491 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-ovndb-tls-certs\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.349513 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-config\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.370878 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.380360 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-config\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.393826 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-combined-ca-bundle\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.403256 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-ovndb-tls-certs\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.424701 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-869f945844-vjsk6"] Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.447829 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-kube-api-access-qbzqn" (OuterVolumeSpecName: "kube-api-access-qbzqn") pod "ae2d0ec9-77c9-4a19-b783-b40613d55eb5" (UID: "ae2d0ec9-77c9-4a19-b783-b40613d55eb5"). InnerVolumeSpecName "kube-api-access-qbzqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451318 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-scripts\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451368 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8896d359-383e-4f56-a18d-2d8a913d05a4-logs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451421 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x2xs\" (UniqueName: \"kubernetes.io/projected/8896d359-383e-4f56-a18d-2d8a913d05a4-kube-api-access-2x2xs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451446 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-internal-tls-certs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451481 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-public-tls-certs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-config-data\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-combined-ca-bundle\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.451616 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbzqn\" (UniqueName: \"kubernetes.io/projected/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-kube-api-access-qbzqn\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.462920 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8896d359-383e-4f56-a18d-2d8a913d05a4-logs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.468791 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6md9m\" (UniqueName: 
\"kubernetes.io/projected/cd5f2a9d-eba9-4157-9b34-fba1714fa562-kube-api-access-6md9m\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.477734 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-scripts\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.490493 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-combined-ca-bundle\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.549600 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-httpd-config\") pod \"neutron-59b8f9f788-85hsf\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.569234 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:31:46 crc kubenswrapper[4724]: E0226 11:31:46.569681 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="dnsmasq-dns" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.569699 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="dnsmasq-dns" Feb 26 11:31:46 crc kubenswrapper[4724]: E0226 11:31:46.569735 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="init" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.569740 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="init" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.569903 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" containerName="dnsmasq-dns" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.572066 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.574716 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-config-data\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.576443 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-internal-tls-certs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.581117 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.594463 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.600389 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.600653 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.600779 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zsnlw" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.600886 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.601799 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8896d359-383e-4f56-a18d-2d8a913d05a4-public-tls-certs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.691468 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" event={"ID":"ae2d0ec9-77c9-4a19-b783-b40613d55eb5","Type":"ContainerDied","Data":"66487d535353f5e1c1b5be705ee6dff0006bcdc9f5da25c4eafb5e394803bbaf"} Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.691514 4724 scope.go:117] "RemoveContainer" containerID="ba1a001785853808c0463aa52a382e160b06f195a0731fa7366a9be330f43189" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.691671 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-rh42r" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.722391 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x2xs\" (UniqueName: \"kubernetes.io/projected/8896d359-383e-4f56-a18d-2d8a913d05a4-kube-api-access-2x2xs\") pod \"placement-869f945844-vjsk6\" (UID: \"8896d359-383e-4f56-a18d-2d8a913d05a4\") " pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.765271 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-scripts\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.765329 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnv2b\" (UniqueName: \"kubernetes.io/projected/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-kube-api-access-bnv2b\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.765352 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.765410 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.765486 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.765525 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.836574 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae2d0ec9-77c9-4a19-b783-b40613d55eb5" (UID: "ae2d0ec9-77c9-4a19-b783-b40613d55eb5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.871992 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.883316 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.890704 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-scripts\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.890987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnv2b\" (UniqueName: \"kubernetes.io/projected/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-kube-api-access-bnv2b\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.891110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.891327 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.891614 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.891763 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.898861 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-wtcch"] Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.908362 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.908422 4724 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.908478 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.909123 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55d1fb33975b75b061c0528685eae11004b1a2f0eedaec829e3798af02cfba8d"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.909166 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://55d1fb33975b75b061c0528685eae11004b1a2f0eedaec829e3798af02cfba8d" gracePeriod=600 Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.933378 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.933813 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.936150 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-scripts\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:46 crc kubenswrapper[4724]: I0226 11:31:46.967920 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.021862 4724 scope.go:117] "RemoveContainer" containerID="d4bb3f0f7c3a2e2ce156e70d10b27ee7d942386ec78a8a7269fd471be82efdce" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.022549 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.041566 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-l9gr5"] Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.045556 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-config" (OuterVolumeSpecName: "config") pod "ae2d0ec9-77c9-4a19-b783-b40613d55eb5" (UID: "ae2d0ec9-77c9-4a19-b783-b40613d55eb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.048975 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.060513 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-l9gr5"] Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.064117 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae2d0ec9-77c9-4a19-b783-b40613d55eb5" (UID: "ae2d0ec9-77c9-4a19-b783-b40613d55eb5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.085131 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae2d0ec9-77c9-4a19-b783-b40613d55eb5" (UID: "ae2d0ec9-77c9-4a19-b783-b40613d55eb5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.085310 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnv2b\" (UniqueName: \"kubernetes.io/projected/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-kube-api-access-bnv2b\") pod \"cinder-scheduler-0\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " pod="openstack/cinder-scheduler-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.089772 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.092386 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.098503 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.098537 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.098546 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.101556 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.138086 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.142672 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.175134 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae2d0ec9-77c9-4a19-b783-b40613d55eb5" (UID: "ae2d0ec9-77c9-4a19-b783-b40613d55eb5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205411 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205472 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8d446a7-c07f-4b3d-ae55-a2246b928864-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205516 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-scripts\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205556 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkwgz\" (UniqueName: \"kubernetes.io/projected/c8d446a7-c07f-4b3d-ae55-a2246b928864-kube-api-access-nkwgz\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205589 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-sb\") 
pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205610 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205647 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205668 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-config\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205693 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d446a7-c07f-4b3d-ae55-a2246b928864-logs\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205712 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205741 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8q8c\" (UniqueName: \"kubernetes.io/projected/485e3e4a-c268-4d2e-8489-fc72d7dd385a-kube-api-access-q8q8c\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205775 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205790 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.205834 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2d0ec9-77c9-4a19-b783-b40613d55eb5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:47 crc 
kubenswrapper[4724]: I0226 11:31:47.309897 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310524 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310568 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310595 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-config\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310625 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d446a7-c07f-4b3d-ae55-a2246b928864-logs\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310645 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8q8c\" (UniqueName: \"kubernetes.io/projected/485e3e4a-c268-4d2e-8489-fc72d7dd385a-kube-api-access-q8q8c\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310718 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310736 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310759 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310808 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8d446a7-c07f-4b3d-ae55-a2246b928864-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310843 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-scripts\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.310879 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkwgz\" (UniqueName: \"kubernetes.io/projected/c8d446a7-c07f-4b3d-ae55-a2246b928864-kube-api-access-nkwgz\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.311319 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.311978 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.312237 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.312821 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8d446a7-c07f-4b3d-ae55-a2246b928864-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.315669 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d446a7-c07f-4b3d-ae55-a2246b928864-logs\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.316813 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.316847 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-config\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.352129 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.411209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.411763 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.412154 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-scripts\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.430427 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkwgz\" (UniqueName: \"kubernetes.io/projected/c8d446a7-c07f-4b3d-ae55-a2246b928864-kube-api-access-nkwgz\") pod \"cinder-api-0\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.432877 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8q8c\" (UniqueName: \"kubernetes.io/projected/485e3e4a-c268-4d2e-8489-fc72d7dd385a-kube-api-access-q8q8c\") pod \"dnsmasq-dns-6bb4fc677f-l9gr5\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.538163 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.554775 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-rh42r"] Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.586907 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-rh42r"] Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.728411 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.914695 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="55d1fb33975b75b061c0528685eae11004b1a2f0eedaec829e3798af02cfba8d" exitCode=0 Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.915050 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"55d1fb33975b75b061c0528685eae11004b1a2f0eedaec829e3798af02cfba8d"} Feb 26 11:31:47 crc kubenswrapper[4724]: I0226 11:31:47.915083 4724 scope.go:117] "RemoveContainer" containerID="89545c6222687528337cf32ba9bda30e19443137c7e0933c297f827f49d03a36" Feb 26 11:31:47 crc kubenswrapper[4724]: E0226 11:31:47.950665 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae2d0ec9_77c9_4a19_b783_b40613d55eb5.slice/crio-66487d535353f5e1c1b5be705ee6dff0006bcdc9f5da25c4eafb5e394803bbaf\": RecentStats: unable to find data in memory cache]" Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.004103 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2d0ec9-77c9-4a19-b783-b40613d55eb5" path="/var/lib/kubelet/pods/ae2d0ec9-77c9-4a19-b783-b40613d55eb5/volumes" Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.004908 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-wtcch"] Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.257279 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.344520 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59b8f9f788-85hsf"] Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.533225 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-869f945844-vjsk6"] Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.795114 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-l9gr5"] Feb 26 11:31:48 crc kubenswrapper[4724]: W0226 11:31:48.810679 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod485e3e4a_c268_4d2e_8489_fc72d7dd385a.slice/crio-78f8a3cdd0259bc5a2d34fe3bce8a2200cb692d7bb8caaa3ed45a9300a64e014 WatchSource:0}: Error finding container 78f8a3cdd0259bc5a2d34fe3bce8a2200cb692d7bb8caaa3ed45a9300a64e014: Status 404 returned error can't find the container with id 78f8a3cdd0259bc5a2d34fe3bce8a2200cb692d7bb8caaa3ed45a9300a64e014 Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.846817 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.992505 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b8f9f788-85hsf" event={"ID":"cd5f2a9d-eba9-4157-9b34-fba1714fa562","Type":"ContainerStarted","Data":"31b208c45ecbb18bfb2d7e7dafbb3836f48970b5a23499aa599d769b90ed63c0"} Feb 26 11:31:48 crc kubenswrapper[4724]: I0226 11:31:48.992744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b8f9f788-85hsf" 
event={"ID":"cd5f2a9d-eba9-4157-9b34-fba1714fa562","Type":"ContainerStarted","Data":"41e8be9a72bbf4dade2221adc7533832eb6dfbfa124a269abd49ee46954823f6"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.015844 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba","Type":"ContainerStarted","Data":"c16d1dc5d614cebd790438d4ed82e54f828f0d534cfba901fa035be6966c6d60"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.051544 4724 generic.go:334] "Generic (PLEG): container finished" podID="edd3f8dc-e935-462e-ae52-c136ad4fddc2" containerID="c50ba90be3709153a7b2269e3a5d1f612179ae04adf5831dc8444f1c07ca48b0" exitCode=0 Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.051633 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" event={"ID":"edd3f8dc-e935-462e-ae52-c136ad4fddc2","Type":"ContainerDied","Data":"c50ba90be3709153a7b2269e3a5d1f612179ae04adf5831dc8444f1c07ca48b0"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.051660 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" event={"ID":"edd3f8dc-e935-462e-ae52-c136ad4fddc2","Type":"ContainerStarted","Data":"7612f5fc03e40d23e59686bbaf820a288c9b21ef6f533493144d434e06e3d786"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.068736 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" event={"ID":"485e3e4a-c268-4d2e-8489-fc72d7dd385a","Type":"ContainerStarted","Data":"78f8a3cdd0259bc5a2d34fe3bce8a2200cb692d7bb8caaa3ed45a9300a64e014"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.075646 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869f945844-vjsk6" event={"ID":"8896d359-383e-4f56-a18d-2d8a913d05a4","Type":"ContainerStarted","Data":"a4764bf9aff7545c9911981d75cbb48fd153b800c6497e3af4c8e71e431ccff4"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.114971 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8d446a7-c07f-4b3d-ae55-a2246b928864","Type":"ContainerStarted","Data":"99b1e3bfdca31a5246137e552281632bcae2ae7ef8de29715186bc98502fe65c"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.119877 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.123035 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"} Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.130748 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.281452 4724 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:49 crc kubenswrapper[4724]: I0226 11:31:49.918879 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.232690 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.232945 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.290521 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b8f9f788-85hsf" event={"ID":"cd5f2a9d-eba9-4157-9b34-fba1714fa562","Type":"ContainerStarted","Data":"f628d07d09171904eb17b3a4883a21e8aaf92b69ee4ef08c151d9925936bc2da"} Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.291834 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.342690 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59b8f9f788-85hsf" podStartSLOduration=5.342672977 podStartE2EDuration="5.342672977s" podCreationTimestamp="2026-02-26 11:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:50.341701772 +0000 UTC m=+1576.997440887" watchObservedRunningTime="2026-02-26 11:31:50.342672977 +0000 UTC m=+1576.998412092" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.345273 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.380584 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" event={"ID":"edd3f8dc-e935-462e-ae52-c136ad4fddc2","Type":"ContainerDied","Data":"7612f5fc03e40d23e59686bbaf820a288c9b21ef6f533493144d434e06e3d786"} Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.380639 4724 scope.go:117] "RemoveContainer" containerID="c50ba90be3709153a7b2269e3a5d1f612179ae04adf5831dc8444f1c07ca48b0" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.437669 4724 generic.go:334] "Generic (PLEG): container finished" podID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerID="1df0a15f4d4b0af709163821a05cecf36fb1fd8388535e3efe699ece84ffcb02" exitCode=0 Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.437926 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" event={"ID":"485e3e4a-c268-4d2e-8489-fc72d7dd385a","Type":"ContainerDied","Data":"1df0a15f4d4b0af709163821a05cecf36fb1fd8388535e3efe699ece84ffcb02"} Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.448510 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmmvj\" (UniqueName: \"kubernetes.io/projected/edd3f8dc-e935-462e-ae52-c136ad4fddc2-kube-api-access-fmmvj\") pod \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.448656 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-config\") pod \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.448708 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-svc\") pod \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.448808 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-sb\") pod \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.448847 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-nb\") pod \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.448894 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-swift-storage-0\") pod \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\" (UID: \"edd3f8dc-e935-462e-ae52-c136ad4fddc2\") " Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.499369 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd3f8dc-e935-462e-ae52-c136ad4fddc2-kube-api-access-fmmvj" (OuterVolumeSpecName: "kube-api-access-fmmvj") pod 
"edd3f8dc-e935-462e-ae52-c136ad4fddc2" (UID: "edd3f8dc-e935-462e-ae52-c136ad4fddc2"). InnerVolumeSpecName "kube-api-access-fmmvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.517302 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869f945844-vjsk6" event={"ID":"8896d359-383e-4f56-a18d-2d8a913d05a4","Type":"ContainerStarted","Data":"e02292ece65c19e446f1176d88ab77680110091f8a4504ecfcabf5a16b630d1b"} Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.550065 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "edd3f8dc-e935-462e-ae52-c136ad4fddc2" (UID: "edd3f8dc-e935-462e-ae52-c136ad4fddc2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.556470 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "edd3f8dc-e935-462e-ae52-c136ad4fddc2" (UID: "edd3f8dc-e935-462e-ae52-c136ad4fddc2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.558432 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.558452 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.558464 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmmvj\" (UniqueName: \"kubernetes.io/projected/edd3f8dc-e935-462e-ae52-c136ad4fddc2-kube-api-access-fmmvj\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.569257 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "edd3f8dc-e935-462e-ae52-c136ad4fddc2" (UID: "edd3f8dc-e935-462e-ae52-c136ad4fddc2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.584470 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-config" (OuterVolumeSpecName: "config") pod "edd3f8dc-e935-462e-ae52-c136ad4fddc2" (UID: "edd3f8dc-e935-462e-ae52-c136ad4fddc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.613305 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "edd3f8dc-e935-462e-ae52-c136ad4fddc2" (UID: "edd3f8dc-e935-462e-ae52-c136ad4fddc2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.659945 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.659973 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:50 crc kubenswrapper[4724]: I0226 11:31:50.659982 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd3f8dc-e935-462e-ae52-c136ad4fddc2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.069080 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.553336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" event={"ID":"485e3e4a-c268-4d2e-8489-fc72d7dd385a","Type":"ContainerStarted","Data":"781945642b288fe8fb053b7f3203a97308fd0f786ae4d69c650653d1e0d37274"} Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.553772 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.561849 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869f945844-vjsk6" event={"ID":"8896d359-383e-4f56-a18d-2d8a913d05a4","Type":"ContainerStarted","Data":"2e1b6a18c55886234c2a9ff85afee7c0cc5503483788134004cafdf173bf89bc"} Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.562840 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.562876 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-869f945844-vjsk6" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.580679 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" podStartSLOduration=5.580662749 podStartE2EDuration="5.580662749s" podCreationTimestamp="2026-02-26 11:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:51.578661738 +0000 UTC m=+1578.234400853" watchObservedRunningTime="2026-02-26 11:31:51.580662749 +0000 UTC m=+1578.236401864" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.585874 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8d446a7-c07f-4b3d-ae55-a2246b928864","Type":"ContainerStarted","Data":"ba77161bb6308333cb77b913cdff12582848a60145d93f8db3673399c868f9df"} Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.600373 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-wtcch" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.611544 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-869f945844-vjsk6" podStartSLOduration=5.611526788 podStartE2EDuration="5.611526788s" podCreationTimestamp="2026-02-26 11:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:51.609604398 +0000 UTC m=+1578.265343513" watchObservedRunningTime="2026-02-26 11:31:51.611526788 +0000 UTC m=+1578.267265903" Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.724286 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-wtcch"] Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.755255 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-wtcch"] Feb 26 11:31:51 crc kubenswrapper[4724]: I0226 11:31:51.996987 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd3f8dc-e935-462e-ae52-c136ad4fddc2" path="/var/lib/kubelet/pods/edd3f8dc-e935-462e-ae52-c136ad4fddc2/volumes" Feb 26 11:31:52 crc kubenswrapper[4724]: I0226 11:31:52.616486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8d446a7-c07f-4b3d-ae55-a2246b928864","Type":"ContainerStarted","Data":"a1dcc4c5ffad43041b42764e046bd7ba4586367e5215f094e3beeafabb918d90"} Feb 26 11:31:52 crc kubenswrapper[4724]: I0226 11:31:52.617054 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 11:31:52 crc kubenswrapper[4724]: I0226 11:31:52.616849 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api-log" containerID="cri-o://ba77161bb6308333cb77b913cdff12582848a60145d93f8db3673399c868f9df" gracePeriod=30 Feb 26 11:31:52 crc kubenswrapper[4724]: I0226 11:31:52.617164 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" containerID="cri-o://a1dcc4c5ffad43041b42764e046bd7ba4586367e5215f094e3beeafabb918d90" gracePeriod=30 Feb 26 11:31:52 crc kubenswrapper[4724]: I0226 11:31:52.632002 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba","Type":"ContainerStarted","Data":"ca086d806437b1299667e569e39f2d2fc28aa43a7016906977c325f3c1600f9c"} Feb 26 11:31:52 crc kubenswrapper[4724]: I0226 11:31:52.654031 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.654007804 podStartE2EDuration="5.654007804s" podCreationTimestamp="2026-02-26 11:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:52.642818958 +0000 UTC m=+1579.298558073" watchObservedRunningTime="2026-02-26 11:31:52.654007804 +0000 UTC m=+1579.309746919" Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.642568 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba","Type":"ContainerStarted","Data":"ba00db32d18df0e932a4b232f2f66ca2707975cd65ecbfee95339e6ce8df891d"} Feb 26 11:31:53 crc 
kubenswrapper[4724]: I0226 11:31:53.644558 4724 generic.go:334] "Generic (PLEG): container finished" podID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerID="ba77161bb6308333cb77b913cdff12582848a60145d93f8db3673399c868f9df" exitCode=143 Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.644833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8d446a7-c07f-4b3d-ae55-a2246b928864","Type":"ContainerDied","Data":"ba77161bb6308333cb77b913cdff12582848a60145d93f8db3673399c868f9df"} Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.665754 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.506817783 podStartE2EDuration="7.665735496s" podCreationTimestamp="2026-02-26 11:31:46 +0000 UTC" firstStartedPulling="2026-02-26 11:31:48.289058855 +0000 UTC m=+1574.944797970" lastFinishedPulling="2026-02-26 11:31:50.447976568 +0000 UTC m=+1577.103715683" observedRunningTime="2026-02-26 11:31:53.65964264 +0000 UTC m=+1580.315381755" watchObservedRunningTime="2026-02-26 11:31:53.665735496 +0000 UTC m=+1580.321474611" Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.692763 4724 scope.go:117] "RemoveContainer" containerID="631b4e0d3f1136fe00d6e49e73c80d84d1cc2474537488b34625df695bccca39" Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.729884 4724 scope.go:117] "RemoveContainer" containerID="e5f0c1aeb9bfe6e49859cc904f6bb78a38a42711fb678b661fd2587c59b9ae6a" Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.798449 4724 scope.go:117] "RemoveContainer" containerID="f48c06d1186a9899ea8e900f222701a491e712f8120851cf9be36495c3f544c3" Feb 26 11:31:53 crc kubenswrapper[4724]: I0226 11:31:53.935039 4724 scope.go:117] "RemoveContainer" containerID="3bf8c50b2e7688d339704534cec0145b50fe43b24b307a08ad7136adbbc1467e" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.068675 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-555b8bfd77-p4h8t"] Feb 26 11:31:54 crc kubenswrapper[4724]: E0226 11:31:54.069721 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd3f8dc-e935-462e-ae52-c136ad4fddc2" containerName="init" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.069740 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd3f8dc-e935-462e-ae52-c136ad4fddc2" containerName="init" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.069979 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd3f8dc-e935-462e-ae52-c136ad4fddc2" containerName="init" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.071116 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.076350 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.077302 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.097632 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-555b8bfd77-p4h8t"] Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.127126 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.136632 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5466fc4f46-xdj8r" podUID="f9707878-82b6-46d7-b6c6-65745f7c72c3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154224 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96f8\" (UniqueName: \"kubernetes.io/projected/e7b8af94-a922-4315-bab6-3b67cda647e0-kube-api-access-j96f8\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154329 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-httpd-config\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154364 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-ovndb-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154441 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-config\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154467 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-internal-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154490 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-combined-ca-bundle\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.154505 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-public-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.184060 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5466fc4f46-xdj8r" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257224 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-config\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257267 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-internal-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257295 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-combined-ca-bundle\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257312 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-public-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257331 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j96f8\" (UniqueName: \"kubernetes.io/projected/e7b8af94-a922-4315-bab6-3b67cda647e0-kube-api-access-j96f8\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257417 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-httpd-config\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.257470 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-ovndb-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.268674 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-ovndb-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.269223 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-combined-ca-bundle\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.270723 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-httpd-config\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.271223 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-internal-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.275335 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-config\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.275416 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-public-tls-certs\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.285164 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5466fc4f46-xdj8r" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.303310 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j96f8\" (UniqueName: \"kubernetes.io/projected/e7b8af94-a922-4315-bab6-3b67cda647e0-kube-api-access-j96f8\") pod \"neutron-555b8bfd77-p4h8t\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.401794 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.411093 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f5dc64bf8-kjdl8"] Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.411620 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" containerID="cri-o://b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83" gracePeriod=30 Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.417671 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" containerID="cri-o://5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf" gracePeriod=30 Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.699566 4724 generic.go:334] "Generic (PLEG): container finished" podID="9a360323-97b1-46ae-9379-f340e76bf065" containerID="b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83" exitCode=143 Feb 26 11:31:54 crc kubenswrapper[4724]: I0226 11:31:54.700229 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" event={"ID":"9a360323-97b1-46ae-9379-f340e76bf065","Type":"ContainerDied","Data":"b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83"} Feb 26 11:31:55 crc kubenswrapper[4724]: I0226 11:31:55.477193 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-555b8bfd77-p4h8t"] Feb 26 11:31:55 crc kubenswrapper[4724]: W0226 11:31:55.495170 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7b8af94_a922_4315_bab6_3b67cda647e0.slice/crio-dd9db7b4ca78b6af142c92add520db98e02f9a61043135797508ebaf3416aefa WatchSource:0}: Error finding container dd9db7b4ca78b6af142c92add520db98e02f9a61043135797508ebaf3416aefa: Status 404 returned error can't find the container with id dd9db7b4ca78b6af142c92add520db98e02f9a61043135797508ebaf3416aefa Feb 26 11:31:55 crc kubenswrapper[4724]: I0226 11:31:55.718437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555b8bfd77-p4h8t" event={"ID":"e7b8af94-a922-4315-bab6-3b67cda647e0","Type":"ContainerStarted","Data":"dd9db7b4ca78b6af142c92add520db98e02f9a61043135797508ebaf3416aefa"} Feb 26 11:31:56 crc kubenswrapper[4724]: I0226 11:31:56.729240 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555b8bfd77-p4h8t" event={"ID":"e7b8af94-a922-4315-bab6-3b67cda647e0","Type":"ContainerStarted","Data":"7abab6a00fbf38c719c85c773734fe9c390c4ef4a63f97e9ee3e057437f2a57d"} Feb 26 11:31:56 crc kubenswrapper[4724]: I0226 11:31:56.729844 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555b8bfd77-p4h8t" event={"ID":"e7b8af94-a922-4315-bab6-3b67cda647e0","Type":"ContainerStarted","Data":"aff0fea17a29a376504998816473a6ceda732ced9fc9d08ff62f8ee9435e7897"} Feb 26 11:31:56 crc kubenswrapper[4724]: I0226 11:31:56.729861 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:31:56 crc kubenswrapper[4724]: I0226 11:31:56.760319 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-555b8bfd77-p4h8t" podStartSLOduration=2.760294985 
podStartE2EDuration="2.760294985s" podCreationTimestamp="2026-02-26 11:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:31:56.752418114 +0000 UTC m=+1583.408157249" watchObservedRunningTime="2026-02-26 11:31:56.760294985 +0000 UTC m=+1583.416034110" Feb 26 11:31:57 crc kubenswrapper[4724]: I0226 11:31:57.139707 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 11:31:57 crc kubenswrapper[4724]: I0226 11:31:57.141026 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.172:8080/\": dial tcp 10.217.0.172:8080: connect: connection refused" Feb 26 11:31:57 crc kubenswrapper[4724]: I0226 11:31:57.628578 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-678bf4f784-7wp9n" Feb 26 11:31:57 crc kubenswrapper[4724]: I0226 11:31:57.741324 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:31:57 crc kubenswrapper[4724]: I0226 11:31:57.848776 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-8k5rb"] Feb 26 11:31:57 crc kubenswrapper[4724]: I0226 11:31:57.849036 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" containerName="dnsmasq-dns" containerID="cri-o://d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772" gracePeriod=10 Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.223700 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.234466 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.252438 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-24zcg" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.253523 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.253932 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.274156 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.331326 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-openstack-config\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.331378 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.331432 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-openstack-config-secret\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.331481 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knbdl\" (UniqueName: \"kubernetes.io/projected/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-kube-api-access-knbdl\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.434877 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-openstack-config\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.434954 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.435055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-openstack-config-secret\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.435119 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-knbdl\" (UniqueName: \"kubernetes.io/projected/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-kube-api-access-knbdl\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.442550 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-openstack-config\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.472662 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.490813 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knbdl\" (UniqueName: \"kubernetes.io/projected/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-kube-api-access-knbdl\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.512380 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed-openstack-config-secret\") pod \"openstackclient\" (UID: \"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed\") " pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.623377 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.720166 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.752631 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgkb5\" (UniqueName: \"kubernetes.io/projected/5f6c848a-a642-4b86-bfae-e715d8380602-kube-api-access-tgkb5\") pod \"5f6c848a-a642-4b86-bfae-e715d8380602\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.752809 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-swift-storage-0\") pod \"5f6c848a-a642-4b86-bfae-e715d8380602\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.752865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-svc\") pod \"5f6c848a-a642-4b86-bfae-e715d8380602\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.752978 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-config\") pod \"5f6c848a-a642-4b86-bfae-e715d8380602\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.753075 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-sb\") pod \"5f6c848a-a642-4b86-bfae-e715d8380602\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.753127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-nb\") pod \"5f6c848a-a642-4b86-bfae-e715d8380602\" (UID: \"5f6c848a-a642-4b86-bfae-e715d8380602\") " Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.808887 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6c848a-a642-4b86-bfae-e715d8380602-kube-api-access-tgkb5" (OuterVolumeSpecName: "kube-api-access-tgkb5") pod "5f6c848a-a642-4b86-bfae-e715d8380602" (UID: "5f6c848a-a642-4b86-bfae-e715d8380602"). InnerVolumeSpecName "kube-api-access-tgkb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.834116 4724 generic.go:334] "Generic (PLEG): container finished" podID="5f6c848a-a642-4b86-bfae-e715d8380602" containerID="d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772" exitCode=0 Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.834515 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" event={"ID":"5f6c848a-a642-4b86-bfae-e715d8380602","Type":"ContainerDied","Data":"d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772"} Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.834548 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" event={"ID":"5f6c848a-a642-4b86-bfae-e715d8380602","Type":"ContainerDied","Data":"78739ee55bddcdccdcb37728888cce2c025e806a2ab82dee7da1aa579591f868"} Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.834570 4724 scope.go:117] "RemoveContainer" containerID="d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.834754 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-8k5rb" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.856533 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgkb5\" (UniqueName: \"kubernetes.io/projected/5f6c848a-a642-4b86-bfae-e715d8380602-kube-api-access-tgkb5\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.909232 4724 scope.go:117] "RemoveContainer" containerID="895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.942333 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f6c848a-a642-4b86-bfae-e715d8380602" (UID: "5f6c848a-a642-4b86-bfae-e715d8380602"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.958404 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.978819 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f6c848a-a642-4b86-bfae-e715d8380602" (UID: "5f6c848a-a642-4b86-bfae-e715d8380602"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.985924 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-config" (OuterVolumeSpecName: "config") pod "5f6c848a-a642-4b86-bfae-e715d8380602" (UID: "5f6c848a-a642-4b86-bfae-e715d8380602"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:58 crc kubenswrapper[4724]: I0226 11:31:58.998582 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f6c848a-a642-4b86-bfae-e715d8380602" (UID: "5f6c848a-a642-4b86-bfae-e715d8380602"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.013619 4724 scope.go:117] "RemoveContainer" containerID="d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772" Feb 26 11:31:59 crc kubenswrapper[4724]: E0226 11:31:59.015701 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772\": container with ID starting with d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772 not found: ID does not exist" containerID="d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.015802 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772"} err="failed to get container status \"d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772\": rpc error: code = NotFound desc = could not find container \"d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772\": container with ID starting with d3918e5db25cae8282ce1e7cc0897e2807bf0a54c897d8f86079faf866b9b772 not found: ID does not exist" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.015874 4724 scope.go:117] "RemoveContainer" containerID="895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec" Feb 26 11:31:59 crc kubenswrapper[4724]: E0226 11:31:59.017048 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec\": container with ID starting with 895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec not found: ID does not exist" containerID="895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.017170 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec"} err="failed to get container status \"895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec\": rpc error: code = NotFound desc = could not find container \"895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec\": container with ID starting with 895c3649953baa7f9e93632ff36b7f19fcbafbc39d72c8046fb998695e889bec not found: ID does not exist" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.034725 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f6c848a-a642-4b86-bfae-e715d8380602" (UID: "5f6c848a-a642-4b86-bfae-e715d8380602"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.063391 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.063573 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.063655 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.063709 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f6c848a-a642-4b86-bfae-e715d8380602-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.209021 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-8k5rb"] Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.223245 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-8k5rb"] Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.230687 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.855202 4724 generic.go:334] "Generic (PLEG): container finished" podID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerID="3e5edab1e2c718511750fd9327e7561944102843f5433d3bb1fb9259ca86717b" exitCode=137 Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.855234 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerDied","Data":"3e5edab1e2c718511750fd9327e7561944102843f5433d3bb1fb9259ca86717b"} Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.855637 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerStarted","Data":"ac398868e5679a7aa01f6bdf65598f3111cd3c8e4085be5a0b71236c8e2306eb"} Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.863290 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed","Type":"ContainerStarted","Data":"c5ab019872aebc5856503bdc7f4569a268b926491ab8a6c74fba0b8d7be541f1"} Feb 26 11:31:59 crc kubenswrapper[4724]: I0226 11:31:59.986348 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" path="/var/lib/kubelet/pods/5f6c848a-a642-4b86-bfae-e715d8380602/volumes" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.151511 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535092-bd82s"] Feb 26 11:32:00 crc kubenswrapper[4724]: E0226 11:32:00.151917 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" containerName="dnsmasq-dns" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.151937 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" 
containerName="dnsmasq-dns" Feb 26 11:32:00 crc kubenswrapper[4724]: E0226 11:32:00.151951 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" containerName="init" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.151960 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" containerName="init" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.152166 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6c848a-a642-4b86-bfae-e715d8380602" containerName="dnsmasq-dns" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.152820 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-bd82s" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.159635 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.159874 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.166437 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.181788 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-bd82s"] Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.197715 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dr7\" (UniqueName: \"kubernetes.io/projected/fe38a495-f33f-49c3-a514-75542323fe2e-kube-api-access-r7dr7\") pod \"auto-csr-approver-29535092-bd82s\" (UID: \"fe38a495-f33f-49c3-a514-75542323fe2e\") " pod="openshift-infra/auto-csr-approver-29535092-bd82s" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.197865 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.198110 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.300020 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dr7\" (UniqueName: \"kubernetes.io/projected/fe38a495-f33f-49c3-a514-75542323fe2e-kube-api-access-r7dr7\") pod \"auto-csr-approver-29535092-bd82s\" (UID: \"fe38a495-f33f-49c3-a514-75542323fe2e\") " pod="openshift-infra/auto-csr-approver-29535092-bd82s" Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.324764 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dr7\" (UniqueName: \"kubernetes.io/projected/fe38a495-f33f-49c3-a514-75542323fe2e-kube-api-access-r7dr7\") pod \"auto-csr-approver-29535092-bd82s\" (UID: \"fe38a495-f33f-49c3-a514-75542323fe2e\") " pod="openshift-infra/auto-csr-approver-29535092-bd82s" 
Feb 26 11:32:00 crc kubenswrapper[4724]: I0226 11:32:00.477888 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-bd82s" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.167655 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-bd82s"] Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.227021 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:44762->10.217.0.166:9311: read: connection reset by peer" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.227683 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:44770->10.217.0.166:9311: read: connection reset by peer" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.830480 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.921375 4724 generic.go:334] "Generic (PLEG): container finished" podID="9a360323-97b1-46ae-9379-f340e76bf065" containerID="5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf" exitCode=0 Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.921440 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" event={"ID":"9a360323-97b1-46ae-9379-f340e76bf065","Type":"ContainerDied","Data":"5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf"} Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.921448 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.921488 4724 scope.go:117] "RemoveContainer" containerID="5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.921477 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5dc64bf8-kjdl8" event={"ID":"9a360323-97b1-46ae-9379-f340e76bf065","Type":"ContainerDied","Data":"c68e4c9e8103eee91de8a34a550681ffb1526a7be5b751a18ff6c82cee678586"} Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.928826 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-bd82s" event={"ID":"fe38a495-f33f-49c3-a514-75542323fe2e","Type":"ContainerStarted","Data":"17f00f9d858a46768911a7815e5f65a449e131d05a8b2a23ef2c0569fb13329a"} Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.959975 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data-custom\") pod \"9a360323-97b1-46ae-9379-f340e76bf065\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.960398 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data\") pod \"9a360323-97b1-46ae-9379-f340e76bf065\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.960461 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwf59\" (UniqueName: \"kubernetes.io/projected/9a360323-97b1-46ae-9379-f340e76bf065-kube-api-access-xwf59\") pod \"9a360323-97b1-46ae-9379-f340e76bf065\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.960505 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-combined-ca-bundle\") pod \"9a360323-97b1-46ae-9379-f340e76bf065\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.960645 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a360323-97b1-46ae-9379-f340e76bf065-logs\") pod \"9a360323-97b1-46ae-9379-f340e76bf065\" (UID: \"9a360323-97b1-46ae-9379-f340e76bf065\") " Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.961769 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a360323-97b1-46ae-9379-f340e76bf065-logs" (OuterVolumeSpecName: "logs") pod "9a360323-97b1-46ae-9379-f340e76bf065" (UID: "9a360323-97b1-46ae-9379-f340e76bf065"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:01 crc kubenswrapper[4724]: I0226 11:32:01.977789 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9a360323-97b1-46ae-9379-f340e76bf065" (UID: "9a360323-97b1-46ae-9379-f340e76bf065"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.016067 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a360323-97b1-46ae-9379-f340e76bf065-kube-api-access-xwf59" (OuterVolumeSpecName: "kube-api-access-xwf59") pod "9a360323-97b1-46ae-9379-f340e76bf065" (UID: "9a360323-97b1-46ae-9379-f340e76bf065"). InnerVolumeSpecName "kube-api-access-xwf59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.031729 4724 scope.go:117] "RemoveContainer" containerID="b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.032576 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a360323-97b1-46ae-9379-f340e76bf065" (UID: "9a360323-97b1-46ae-9379-f340e76bf065"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.065651 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a360323-97b1-46ae-9379-f340e76bf065-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.065689 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.065700 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwf59\" (UniqueName: \"kubernetes.io/projected/9a360323-97b1-46ae-9379-f340e76bf065-kube-api-access-xwf59\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.065708 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.099715 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data" (OuterVolumeSpecName: "config-data") pod "9a360323-97b1-46ae-9379-f340e76bf065" (UID: "9a360323-97b1-46ae-9379-f340e76bf065"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.159964 4724 scope.go:117] "RemoveContainer" containerID="5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf" Feb 26 11:32:02 crc kubenswrapper[4724]: E0226 11:32:02.160444 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf\": container with ID starting with 5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf not found: ID does not exist" containerID="5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.160487 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf"} err="failed to get container status \"5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf\": rpc error: code = NotFound desc = could not find container \"5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf\": container with ID starting with 5574f873fa2ff807e63f9d2cad5f5b97bdfebaa74f2a51ee92ec0bae257744bf not found: ID does not exist" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.160513 4724 scope.go:117] "RemoveContainer" containerID="b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83" Feb 26 11:32:02 crc kubenswrapper[4724]: E0226 11:32:02.163115 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83\": container with ID starting with b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83 not found: ID does not exist" containerID="b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.163157 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83"} err="failed to get container status \"b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83\": rpc error: code = NotFound desc = could not find container \"b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83\": container with ID starting with b20bf52af8b01963b2c09904198814aa3c85dbd5830fefe6d30bab32536bae83 not found: ID does not exist" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.168113 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a360323-97b1-46ae-9379-f340e76bf065-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.291264 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f5dc64bf8-kjdl8"] Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.402012 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5f5dc64bf8-kjdl8"] Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.583510 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.650944 
4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.877669 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.959536 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.959787 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="cinder-scheduler" containerID="cri-o://ca086d806437b1299667e569e39f2d2fc28aa43a7016906977c325f3c1600f9c" gracePeriod=30 Feb 26 11:32:02 crc kubenswrapper[4724]: I0226 11:32:02.960282 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="probe" containerID="cri-o://ba00db32d18df0e932a4b232f2f66ca2707975cd65ecbfee95339e6ce8df891d" gracePeriod=30 Feb 26 11:32:04 crc kubenswrapper[4724]: I0226 11:32:04.060541 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a360323-97b1-46ae-9379-f340e76bf065" path="/var/lib/kubelet/pods/9a360323-97b1-46ae-9379-f340e76bf065/volumes" Feb 26 11:32:04 crc kubenswrapper[4724]: I0226 11:32:04.064051 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-bd82s" event={"ID":"fe38a495-f33f-49c3-a514-75542323fe2e","Type":"ContainerStarted","Data":"209c179e834cfbc11ae4615a46134e3f77a34b5848a1e081086216ae023c3126"} Feb 26 11:32:04 crc kubenswrapper[4724]: I0226 11:32:04.131699 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535092-bd82s" podStartSLOduration=2.855569372 podStartE2EDuration="4.131676341s" podCreationTimestamp="2026-02-26 11:32:00 +0000 UTC" firstStartedPulling="2026-02-26 11:32:01.177508938 +0000 UTC m=+1587.833248053" lastFinishedPulling="2026-02-26 11:32:02.453615907 +0000 UTC m=+1589.109355022" observedRunningTime="2026-02-26 11:32:04.122779524 +0000 UTC m=+1590.778518639" watchObservedRunningTime="2026-02-26 11:32:04.131676341 +0000 UTC m=+1590.787415456" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.004377 4724 generic.go:334] "Generic (PLEG): container finished" podID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerID="ba00db32d18df0e932a4b232f2f66ca2707975cd65ecbfee95339e6ce8df891d" exitCode=0 Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.004722 4724 generic.go:334] "Generic (PLEG): container finished" podID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerID="ca086d806437b1299667e569e39f2d2fc28aa43a7016906977c325f3c1600f9c" exitCode=0 Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.005824 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba","Type":"ContainerDied","Data":"ba00db32d18df0e932a4b232f2f66ca2707975cd65ecbfee95339e6ce8df891d"} Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.005853 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba","Type":"ContainerDied","Data":"ca086d806437b1299667e569e39f2d2fc28aa43a7016906977c325f3c1600f9c"} Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.505437 4724 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.590893 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-etc-machine-id\") pod \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.591337 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data\") pod \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.591479 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-scripts\") pod \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.591691 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data-custom\") pod \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.591807 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnv2b\" (UniqueName: \"kubernetes.io/projected/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-kube-api-access-bnv2b\") pod \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.592043 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-combined-ca-bundle\") pod \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\" (UID: \"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba\") " Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.592243 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" (UID: "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.621941 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" (UID: "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.622368 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-kube-api-access-bnv2b" (OuterVolumeSpecName: "kube-api-access-bnv2b") pod "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" (UID: "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba"). InnerVolumeSpecName "kube-api-access-bnv2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.632856 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-scripts" (OuterVolumeSpecName: "scripts") pod "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" (UID: "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.698623 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.698671 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.698684 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnv2b\" (UniqueName: \"kubernetes.io/projected/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-kube-api-access-bnv2b\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.698697 4724 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.852403 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" (UID: "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.868349 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data" (OuterVolumeSpecName: "config-data") pod "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" (UID: "e8763dd7-cd9d-4083-80e3-f7e27fc1fdba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.902610 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:05 crc kubenswrapper[4724]: I0226 11:32:05.902670 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.020228 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e8763dd7-cd9d-4083-80e3-f7e27fc1fdba","Type":"ContainerDied","Data":"c16d1dc5d614cebd790438d4ed82e54f828f0d534cfba901fa035be6966c6d60"} Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.020283 4724 scope.go:117] "RemoveContainer" containerID="ba00db32d18df0e932a4b232f2f66ca2707975cd65ecbfee95339e6ce8df891d" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.020452 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.051984 4724 scope.go:117] "RemoveContainer" containerID="ca086d806437b1299667e569e39f2d2fc28aa43a7016906977c325f3c1600f9c" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.063736 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.088048 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.109387 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:06 crc kubenswrapper[4724]: E0226 11:32:06.109877 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.109901 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" Feb 26 11:32:06 crc kubenswrapper[4724]: E0226 11:32:06.109920 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="probe" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.109929 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="probe" Feb 26 11:32:06 crc kubenswrapper[4724]: E0226 11:32:06.109942 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="cinder-scheduler" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.109949 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="cinder-scheduler" Feb 26 11:32:06 crc kubenswrapper[4724]: E0226 11:32:06.109959 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.109967 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.110342 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="cinder-scheduler" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.110377 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" containerName="probe" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.110393 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api-log" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.110404 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a360323-97b1-46ae-9379-f340e76bf065" containerName="barbican-api" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.119891 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.123752 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.144469 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.311870 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-scripts\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.312359 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.312414 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.312473 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ba4493-2ccf-47d8-a018-eadc53f931cf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.312505 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-config-data\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.312542 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qngqd\" (UniqueName: \"kubernetes.io/projected/67ba4493-2ccf-47d8-a018-eadc53f931cf-kube-api-access-qngqd\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414007 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ba4493-2ccf-47d8-a018-eadc53f931cf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414063 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-config-data\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414103 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qngqd\" (UniqueName: \"kubernetes.io/projected/67ba4493-2ccf-47d8-a018-eadc53f931cf-kube-api-access-qngqd\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414146 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-scripts\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414256 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.414596 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ba4493-2ccf-47d8-a018-eadc53f931cf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.421484 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-scripts\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.421735 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.423792 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-config-data\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.424471 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ba4493-2ccf-47d8-a018-eadc53f931cf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.439967 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qngqd\" (UniqueName: \"kubernetes.io/projected/67ba4493-2ccf-47d8-a018-eadc53f931cf-kube-api-access-qngqd\") pod \"cinder-scheduler-0\" (UID: \"67ba4493-2ccf-47d8-a018-eadc53f931cf\") " pod="openstack/cinder-scheduler-0" 
Feb 26 11:32:06 crc kubenswrapper[4724]: I0226 11:32:06.455108 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 11:32:07 crc kubenswrapper[4724]: I0226 11:32:07.143310 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 11:32:07 crc kubenswrapper[4724]: I0226 11:32:07.627380 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:07 crc kubenswrapper[4724]: I0226 11:32:07.990845 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8763dd7-cd9d-4083-80e3-f7e27fc1fdba" path="/var/lib/kubelet/pods/e8763dd7-cd9d-4083-80e3-f7e27fc1fdba/volumes" Feb 26 11:32:08 crc kubenswrapper[4724]: I0226 11:32:08.078462 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67ba4493-2ccf-47d8-a018-eadc53f931cf","Type":"ContainerStarted","Data":"d2315b8b72a86ef99f4f46647819d005538c6c238800feeca4d8255d352f7391"} Feb 26 11:32:08 crc kubenswrapper[4724]: I0226 11:32:08.085991 4724 generic.go:334] "Generic (PLEG): container finished" podID="fe38a495-f33f-49c3-a514-75542323fe2e" containerID="209c179e834cfbc11ae4615a46134e3f77a34b5848a1e081086216ae023c3126" exitCode=0 Feb 26 11:32:08 crc kubenswrapper[4724]: I0226 11:32:08.086043 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-bd82s" event={"ID":"fe38a495-f33f-49c3-a514-75542323fe2e","Type":"ContainerDied","Data":"209c179e834cfbc11ae4615a46134e3f77a34b5848a1e081086216ae023c3126"} Feb 26 11:32:08 crc kubenswrapper[4724]: I0226 11:32:08.367165 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:32:08 crc kubenswrapper[4724]: I0226 11:32:08.367489 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.125453 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67ba4493-2ccf-47d8-a018-eadc53f931cf","Type":"ContainerStarted","Data":"f0d5b4bec68b526dc78b96c8568664b9743118f44dcff932d1d7c5d7d12298d0"} Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.135071 4724 generic.go:334] "Generic (PLEG): container finished" podID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerID="cf119be6b682f8400345d567636d81c24d1362c00c424d4a82811c66edd703a0" exitCode=137 Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.135374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerDied","Data":"cf119be6b682f8400345d567636d81c24d1362c00c424d4a82811c66edd703a0"} Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.135463 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerStarted","Data":"7099fe5c31115c0b722be7a13c0a9feb5c472f77246d6698e652b193791a6781"} Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.672081 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-bd82s" Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.744810 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dr7\" (UniqueName: \"kubernetes.io/projected/fe38a495-f33f-49c3-a514-75542323fe2e-kube-api-access-r7dr7\") pod \"fe38a495-f33f-49c3-a514-75542323fe2e\" (UID: \"fe38a495-f33f-49c3-a514-75542323fe2e\") " Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.763475 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe38a495-f33f-49c3-a514-75542323fe2e-kube-api-access-r7dr7" (OuterVolumeSpecName: "kube-api-access-r7dr7") pod "fe38a495-f33f-49c3-a514-75542323fe2e" (UID: "fe38a495-f33f-49c3-a514-75542323fe2e"). InnerVolumeSpecName "kube-api-access-r7dr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:09 crc kubenswrapper[4724]: I0226 11:32:09.847503 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7dr7\" (UniqueName: \"kubernetes.io/projected/fe38a495-f33f-49c3-a514-75542323fe2e-kube-api-access-r7dr7\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.153374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535092-bd82s" event={"ID":"fe38a495-f33f-49c3-a514-75542323fe2e","Type":"ContainerDied","Data":"17f00f9d858a46768911a7815e5f65a449e131d05a8b2a23ef2c0569fb13329a"} Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.153427 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17f00f9d858a46768911a7815e5f65a449e131d05a8b2a23ef2c0569fb13329a" Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.153386 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535092-bd82s" Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.157979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67ba4493-2ccf-47d8-a018-eadc53f931cf","Type":"ContainerStarted","Data":"1fd8a3be54e18c7808f6c5e47e7132ad2a3592c4a7ef0d51cd0f9068f3ab0d4e"} Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.196332 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-qmcq9"] Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.211278 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535086-qmcq9"] Feb 26 11:32:10 crc kubenswrapper[4724]: I0226 11:32:10.223497 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.223471482 podStartE2EDuration="4.223471482s" podCreationTimestamp="2026-02-26 11:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:10.186816695 +0000 UTC m=+1596.842555810" watchObservedRunningTime="2026-02-26 11:32:10.223471482 +0000 UTC m=+1596.879210607" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.456913 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.615446 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6746496466-bz5b7"] Feb 26 11:32:11 crc kubenswrapper[4724]: E0226 11:32:11.616251 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe38a495-f33f-49c3-a514-75542323fe2e" containerName="oc" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.616274 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe38a495-f33f-49c3-a514-75542323fe2e" containerName="oc" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.616526 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe38a495-f33f-49c3-a514-75542323fe2e" containerName="oc" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.617333 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.625215 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.625876 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.649881 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6746496466-bz5b7"] Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.656531 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6w9bw" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.755983 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dcl5w"] Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.758047 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.784882 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.784972 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-combined-ca-bundle\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.785017 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data-custom\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.785078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zvp\" (UniqueName: \"kubernetes.io/projected/25ee5971-289d-4cf3-852d-e6473c97582f-kube-api-access-v9zvp\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.856271 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dcl5w"] Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887674 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887694 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 
11:32:11.887721 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-combined-ca-bundle\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887740 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-config\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887768 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data-custom\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887791 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnkz8\" (UniqueName: \"kubernetes.io/projected/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-kube-api-access-jnkz8\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887813 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zvp\" (UniqueName: \"kubernetes.io/projected/25ee5971-289d-4cf3-852d-e6473c97582f-kube-api-access-v9zvp\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.887842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.900982 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.904250 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data-custom\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:11 crc kubenswrapper[4724]: I0226 11:32:11.904944 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-combined-ca-bundle\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 
11:32:12.027380 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.027467 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.027496 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.027526 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-config\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.027573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnkz8\" (UniqueName: \"kubernetes.io/projected/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-kube-api-access-jnkz8\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.027623 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.028522 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-svc\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.029050 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.038939 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-config\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.039480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.047937 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.054925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zvp\" (UniqueName: \"kubernetes.io/projected/25ee5971-289d-4cf3-852d-e6473c97582f-kube-api-access-v9zvp\") pod \"heat-engine-6746496466-bz5b7\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.168068 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnkz8\" (UniqueName: \"kubernetes.io/projected/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-kube-api-access-jnkz8\") pod \"dnsmasq-dns-7d978555f9-dcl5w\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.330257 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d" path="/var/lib/kubelet/pods/ccd0157b-ae8e-4dcb-9473-2c338cf6ee5d/volumes" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.341278 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b57bf547-ctb72"] Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.342535 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b57bf547-ctb72"] Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.342622 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.348762 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.354648 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78c4954f9c-cxzbb"] Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.356051 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.375164 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.375561 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.388017 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78c4954f9c-cxzbb"] Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.408703 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.538684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-combined-ca-bundle\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.538804 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczvp\" (UniqueName: \"kubernetes.io/projected/4e104b5a-57be-474d-957f-25a86e9111a1-kube-api-access-pczvp\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.538836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.538896 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8p9\" (UniqueName: \"kubernetes.io/projected/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-kube-api-access-7s8p9\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.538996 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.539032 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data-custom\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.539106 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-combined-ca-bundle\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.539162 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data-custom\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.640703 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pczvp\" (UniqueName: 
\"kubernetes.io/projected/4e104b5a-57be-474d-957f-25a86e9111a1-kube-api-access-pczvp\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.640772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.640816 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s8p9\" (UniqueName: \"kubernetes.io/projected/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-kube-api-access-7s8p9\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.640890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.640931 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data-custom\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.641006 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-combined-ca-bundle\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.641073 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data-custom\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.641136 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-combined-ca-bundle\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.663538 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data-custom\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.663822 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data\") pod 
\"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.664336 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.665168 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-combined-ca-bundle\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.667889 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data-custom\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.669777 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-combined-ca-bundle\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.672472 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.691780 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s8p9\" (UniqueName: \"kubernetes.io/projected/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-kube-api-access-7s8p9\") pod \"heat-api-78c4954f9c-cxzbb\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.708107 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pczvp\" (UniqueName: \"kubernetes.io/projected/4e104b5a-57be-474d-957f-25a86e9111a1-kube-api-access-pczvp\") pod \"heat-cfnapi-7b57bf547-ctb72\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:12 crc kubenswrapper[4724]: I0226 11:32:12.716993 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.052005 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.171299 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-746558bfbf-gbdpm"] Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.172974 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.179571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.179867 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.181651 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.189571 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-746558bfbf-gbdpm"] Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.260507 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acbb8b99-0b04-48c7-904e-a5c5304813a3-run-httpd\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.260746 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-combined-ca-bundle\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.260814 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwr7\" (UniqueName: \"kubernetes.io/projected/acbb8b99-0b04-48c7-904e-a5c5304813a3-kube-api-access-lqwr7\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.260905 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-public-tls-certs\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.260971 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-config-data\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.261044 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-internal-tls-certs\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.261108 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acbb8b99-0b04-48c7-904e-a5c5304813a3-log-httpd\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " 
pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.261198 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/acbb8b99-0b04-48c7-904e-a5c5304813a3-etc-swift\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.362954 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-combined-ca-bundle\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363009 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqwr7\" (UniqueName: \"kubernetes.io/projected/acbb8b99-0b04-48c7-904e-a5c5304813a3-kube-api-access-lqwr7\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363091 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-public-tls-certs\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363132 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-config-data\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363197 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-internal-tls-certs\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363234 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acbb8b99-0b04-48c7-904e-a5c5304813a3-log-httpd\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363299 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/acbb8b99-0b04-48c7-904e-a5c5304813a3-etc-swift\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363366 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acbb8b99-0b04-48c7-904e-a5c5304813a3-run-httpd\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 
11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.363925 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acbb8b99-0b04-48c7-904e-a5c5304813a3-run-httpd\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.366583 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acbb8b99-0b04-48c7-904e-a5c5304813a3-log-httpd\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.372411 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-internal-tls-certs\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.373045 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-combined-ca-bundle\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.373477 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/acbb8b99-0b04-48c7-904e-a5c5304813a3-etc-swift\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.377054 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-config-data\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.394898 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acbb8b99-0b04-48c7-904e-a5c5304813a3-public-tls-certs\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.403982 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqwr7\" (UniqueName: \"kubernetes.io/projected/acbb8b99-0b04-48c7-904e-a5c5304813a3-kube-api-access-lqwr7\") pod \"swift-proxy-746558bfbf-gbdpm\" (UID: \"acbb8b99-0b04-48c7-904e-a5c5304813a3\") " pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:13 crc kubenswrapper[4724]: I0226 11:32:13.506407 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:16 crc kubenswrapper[4724]: I0226 11:32:16.590212 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-59b8f9f788-85hsf" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 11:32:16 crc kubenswrapper[4724]: I0226 11:32:16.599413 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59b8f9f788-85hsf" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 11:32:16 crc kubenswrapper[4724]: I0226 11:32:16.599431 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-59b8f9f788-85hsf" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 11:32:17 crc kubenswrapper[4724]: I0226 11:32:17.713439 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:17 crc kubenswrapper[4724]: I0226 11:32:17.722526 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 11:32:18 crc kubenswrapper[4724]: I0226 11:32:18.060996 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:32:18 crc kubenswrapper[4724]: I0226 11:32:18.062494 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:32:18 crc kubenswrapper[4724]: I0226 11:32:18.070040 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:18 crc kubenswrapper[4724]: I0226 11:32:18.368984 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:32:20 crc kubenswrapper[4724]: I0226 11:32:20.256793 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.088326 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-869f945844-vjsk6" Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.107398 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-869f945844-vjsk6" Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.203140 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c6f668b64-t5tsj"] Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.203395 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c6f668b64-t5tsj" 
podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-log" containerID="cri-o://58e9ca4f8eb246caaabc4b62bd3b5f71753945816dc69ebbac750df5a38a5f04" gracePeriod=30 Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.203804 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c6f668b64-t5tsj" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-api" containerID="cri-o://b4cc1fec3b8aae9856d581f9f595a4e4629887f44d6b9ff89ce4e94b5030aa9e" gracePeriod=30 Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.230044 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.230852 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="sg-core" containerID="cri-o://b1755022a67635c13cc93d63ca7f3ebc54ada71b41627fd77443fbfb898c0b3f" gracePeriod=30 Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.230880 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-notification-agent" containerID="cri-o://80262f68c3a17cdab8a02f47df7f79ab1a05f36ef2ad0ca829ec203bd02216e4" gracePeriod=30 Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.230956 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="proxy-httpd" containerID="cri-o://5d496e314b4816b519be635885b65799f0c0f04a9d3e5ade9fed904a33bfe612" gracePeriod=30 Feb 26 11:32:23 crc kubenswrapper[4724]: I0226 11:32:23.230968 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-central-agent" containerID="cri-o://a85bc78dafa79d8f07279a4aa337f47c7589205dbd700e113817acf807b1a9bb" gracePeriod=30 Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.503051 4724 generic.go:334] "Generic (PLEG): container finished" podID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerID="a1dcc4c5ffad43041b42764e046bd7ba4586367e5215f094e3beeafabb918d90" exitCode=137 Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.503132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8d446a7-c07f-4b3d-ae55-a2246b928864","Type":"ContainerDied","Data":"a1dcc4c5ffad43041b42764e046bd7ba4586367e5215f094e3beeafabb918d90"} Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.506782 4724 generic.go:334] "Generic (PLEG): container finished" podID="123116af-ca93-48d5-95ef-9154cda84c60" containerID="58e9ca4f8eb246caaabc4b62bd3b5f71753945816dc69ebbac750df5a38a5f04" exitCode=143 Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.506865 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6f668b64-t5tsj" event={"ID":"123116af-ca93-48d5-95ef-9154cda84c60","Type":"ContainerDied","Data":"58e9ca4f8eb246caaabc4b62bd3b5f71753945816dc69ebbac750df5a38a5f04"} Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.509308 4724 generic.go:334] "Generic (PLEG): container finished" podID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerID="b1755022a67635c13cc93d63ca7f3ebc54ada71b41627fd77443fbfb898c0b3f" exitCode=2 Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.509341 4724 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerDied","Data":"b1755022a67635c13cc93d63ca7f3ebc54ada71b41627fd77443fbfb898c0b3f"} Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.569886 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-674894f85d-fwnwf"] Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.581794 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.583963 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-674894f85d-fwnwf"] Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.617453 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data-custom\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.617599 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.617628 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-combined-ca-bundle\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.617832 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5gh\" (UniqueName: \"kubernetes.io/projected/dfb2bad0-3923-4242-9339-b88cc85fc206-kube-api-access-nm5gh\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.719250 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5gh\" (UniqueName: \"kubernetes.io/projected/dfb2bad0-3923-4242-9339-b88cc85fc206-kube-api-access-nm5gh\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.719317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data-custom\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.719385 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 
11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.720161 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-combined-ca-bundle\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.759788 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-78fbbcf444-k8n4t"] Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.762199 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.777372 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-78fbbcf444-k8n4t"] Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.826384 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tz74\" (UniqueName: \"kubernetes.io/projected/791d107b-678e-448e-859c-864e9e66dd16-kube-api-access-2tz74\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.826451 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-combined-ca-bundle\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.826525 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-config-data\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.826581 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-config-data-custom\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.928373 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tz74\" (UniqueName: \"kubernetes.io/projected/791d107b-678e-448e-859c-864e9e66dd16-kube-api-access-2tz74\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.928445 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-combined-ca-bundle\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.928505 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-config-data\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:24 crc kubenswrapper[4724]: I0226 11:32:24.928555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-config-data-custom\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.089962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5gh\" (UniqueName: \"kubernetes.io/projected/dfb2bad0-3923-4242-9339-b88cc85fc206-kube-api-access-nm5gh\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.090738 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-config-data\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.090952 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.091013 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-combined-ca-bundle\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.091588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data-custom\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.091698 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-combined-ca-bundle\") pod \"heat-cfnapi-674894f85d-fwnwf\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.091771 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tz74\" (UniqueName: \"kubernetes.io/projected/791d107b-678e-448e-859c-864e9e66dd16-kube-api-access-2tz74\") pod \"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.092617 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/791d107b-678e-448e-859c-864e9e66dd16-config-data-custom\") pod 
\"heat-engine-78fbbcf444-k8n4t\" (UID: \"791d107b-678e-448e-859c-864e9e66dd16\") " pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.109869 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.255421 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.267870 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-76f4bfd896-xsknh"] Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.269594 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.315465 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-76f4bfd896-xsknh"] Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.339421 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.339497 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data-custom\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.339579 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr6kd\" (UniqueName: \"kubernetes.io/projected/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-kube-api-access-cr6kd\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.339646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-combined-ca-bundle\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.442844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-combined-ca-bundle\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.443092 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.444029 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data-custom\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.444153 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr6kd\" (UniqueName: \"kubernetes.io/projected/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-kube-api-access-cr6kd\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.447883 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.448705 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data-custom\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.468936 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-combined-ca-bundle\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.476107 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr6kd\" (UniqueName: \"kubernetes.io/projected/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-kube-api-access-cr6kd\") pod \"heat-api-76f4bfd896-xsknh\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.531334 4724 generic.go:334] "Generic (PLEG): container finished" podID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerID="5d496e314b4816b519be635885b65799f0c0f04a9d3e5ade9fed904a33bfe612" exitCode=0 Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.531387 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerDied","Data":"5d496e314b4816b519be635885b65799f0c0f04a9d3e5ade9fed904a33bfe612"} Feb 26 11:32:25 crc kubenswrapper[4724]: I0226 11:32:25.587308 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:27 crc kubenswrapper[4724]: I0226 11:32:27.249841 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 11:32:27 crc kubenswrapper[4724]: I0226 11:32:27.437988 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59b8f9f788-85hsf"] Feb 26 11:32:27 crc kubenswrapper[4724]: I0226 11:32:27.438346 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59b8f9f788-85hsf" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-api" containerID="cri-o://31b208c45ecbb18bfb2d7e7dafbb3836f48970b5a23499aa599d769b90ed63c0" gracePeriod=30 Feb 26 11:32:27 crc kubenswrapper[4724]: I0226 11:32:27.438710 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59b8f9f788-85hsf" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" containerID="cri-o://f628d07d09171904eb17b3a4883a21e8aaf92b69ee4ef08c151d9925936bc2da" gracePeriod=30 Feb 26 11:32:27 crc kubenswrapper[4724]: I0226 11:32:27.462433 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59b8f9f788-85hsf" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.170:9696/\": EOF" Feb 26 11:32:27 crc kubenswrapper[4724]: I0226 11:32:27.540081 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.174:8776/healthcheck\": dial tcp 10.217.0.174:8776: connect: connection refused" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.063442 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.318547 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78c4954f9c-cxzbb"] Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.363774 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-58cc4895d6-7zzgw"] Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.365210 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.367779 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.380938 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-58cc4895d6-7zzgw"] Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.382548 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.382915 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.411526 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-combined-ca-bundle\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.411580 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6sv\" (UniqueName: \"kubernetes.io/projected/60dc589b-0663-4d44-a1aa-c57772731f5b-kube-api-access-8j6sv\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.411617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-config-data\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.411643 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-config-data-custom\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.411686 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-internal-tls-certs\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.411744 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-public-tls-certs\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.459982 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b57bf547-ctb72"] 
Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.489765 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5bbc75466c-6dmf6"] Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.491102 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.496792 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.503286 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5bbc75466c-6dmf6"] Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.505606 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.526080 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-public-tls-certs\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.526200 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-combined-ca-bundle\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.526300 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6sv\" (UniqueName: \"kubernetes.io/projected/60dc589b-0663-4d44-a1aa-c57772731f5b-kube-api-access-8j6sv\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.526386 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-config-data\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.526443 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-config-data-custom\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.526581 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-internal-tls-certs\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.540218 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-combined-ca-bundle\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" 
Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.540241 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-config-data-custom\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.541349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-internal-tls-certs\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.542354 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-config-data\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.544971 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dc589b-0663-4d44-a1aa-c57772731f5b-public-tls-certs\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.550048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6sv\" (UniqueName: \"kubernetes.io/projected/60dc589b-0663-4d44-a1aa-c57772731f5b-kube-api-access-8j6sv\") pod \"heat-api-58cc4895d6-7zzgw\" (UID: \"60dc589b-0663-4d44-a1aa-c57772731f5b\") " pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.587206 4724 generic.go:334] "Generic (PLEG): container finished" podID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerID="80262f68c3a17cdab8a02f47df7f79ab1a05f36ef2ad0ca829ec203bd02216e4" exitCode=0 Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.587244 4724 generic.go:334] "Generic (PLEG): container finished" podID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerID="a85bc78dafa79d8f07279a4aa337f47c7589205dbd700e113817acf807b1a9bb" exitCode=0 Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.587289 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerDied","Data":"80262f68c3a17cdab8a02f47df7f79ab1a05f36ef2ad0ca829ec203bd02216e4"} Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.587318 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerDied","Data":"a85bc78dafa79d8f07279a4aa337f47c7589205dbd700e113817acf807b1a9bb"} Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.628350 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-config-data-custom\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.628428 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-internal-tls-certs\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.628479 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-config-data\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.628506 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-combined-ca-bundle\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.628551 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcb9s\" (UniqueName: \"kubernetes.io/projected/e57d7bd1-267a-4643-9581-8554109f7cba-kube-api-access-gcb9s\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.628637 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-public-tls-certs\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.685105 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.730597 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-public-tls-certs\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.730697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-config-data-custom\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.730740 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-internal-tls-certs\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.730799 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-config-data\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.730829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-combined-ca-bundle\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.730872 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcb9s\" (UniqueName: \"kubernetes.io/projected/e57d7bd1-267a-4643-9581-8554109f7cba-kube-api-access-gcb9s\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.739141 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-config-data\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.739811 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-internal-tls-certs\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.743024 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-config-data-custom\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " 
pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.746893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-public-tls-certs\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.746901 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d7bd1-267a-4643-9581-8554109f7cba-combined-ca-bundle\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.769834 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcb9s\" (UniqueName: \"kubernetes.io/projected/e57d7bd1-267a-4643-9581-8554109f7cba-kube-api-access-gcb9s\") pod \"heat-cfnapi-5bbc75466c-6dmf6\" (UID: \"e57d7bd1-267a-4643-9581-8554109f7cba\") " pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:28 crc kubenswrapper[4724]: I0226 11:32:28.806710 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:30 crc kubenswrapper[4724]: I0226 11:32:30.610102 4724 generic.go:334] "Generic (PLEG): container finished" podID="123116af-ca93-48d5-95ef-9154cda84c60" containerID="b4cc1fec3b8aae9856d581f9f595a4e4629887f44d6b9ff89ce4e94b5030aa9e" exitCode=0 Feb 26 11:32:30 crc kubenswrapper[4724]: I0226 11:32:30.610307 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6f668b64-t5tsj" event={"ID":"123116af-ca93-48d5-95ef-9154cda84c60","Type":"ContainerDied","Data":"b4cc1fec3b8aae9856d581f9f595a4e4629887f44d6b9ff89ce4e94b5030aa9e"} Feb 26 11:32:31 crc kubenswrapper[4724]: I0226 11:32:31.621061 4724 generic.go:334] "Generic (PLEG): container finished" podID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerID="f628d07d09171904eb17b3a4883a21e8aaf92b69ee4ef08c151d9925936bc2da" exitCode=0 Feb 26 11:32:31 crc kubenswrapper[4724]: I0226 11:32:31.621091 4724 generic.go:334] "Generic (PLEG): container finished" podID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerID="31b208c45ecbb18bfb2d7e7dafbb3836f48970b5a23499aa599d769b90ed63c0" exitCode=0 Feb 26 11:32:31 crc kubenswrapper[4724]: I0226 11:32:31.621111 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b8f9f788-85hsf" event={"ID":"cd5f2a9d-eba9-4157-9b34-fba1714fa562","Type":"ContainerDied","Data":"f628d07d09171904eb17b3a4883a21e8aaf92b69ee4ef08c151d9925936bc2da"} Feb 26 11:32:31 crc kubenswrapper[4724]: I0226 11:32:31.621136 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b8f9f788-85hsf" event={"ID":"cd5f2a9d-eba9-4157-9b34-fba1714fa562","Type":"ContainerDied","Data":"31b208c45ecbb18bfb2d7e7dafbb3836f48970b5a23499aa599d769b90ed63c0"} Feb 26 11:32:32 crc kubenswrapper[4724]: I0226 11:32:32.539265 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.174:8776/healthcheck\": dial tcp 10.217.0.174:8776: connect: connection refused" Feb 26 11:32:32 crc kubenswrapper[4724]: E0226 11:32:32.569074 4724 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 26 11:32:32 crc kubenswrapper[4724]: E0226 11:32:32.572647 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8h97h5f5h5d6h66ch644h5c5h5ch559h5bch698h8h57h675h88hdbh678h5bbh647h5bdh86h95h67fh5b8h648hf7h5f4h57bh694h5b4h59dh55dq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knbdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:32:32 crc kubenswrapper[4724]: E0226 11:32:32.574502 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed" Feb 26 11:32:32 crc kubenswrapper[4724]: I0226 11:32:32.588474 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": dial tcp 10.217.0.167:3000: connect: connection refused" Feb 26 11:32:32 crc kubenswrapper[4724]: E0226 11:32:32.636417 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed" Feb 26 11:32:32 crc kubenswrapper[4724]: I0226 11:32:32.948312 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7grhf"] Feb 26 11:32:32 crc kubenswrapper[4724]: I0226 11:32:32.966145 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:32 crc kubenswrapper[4724]: I0226 11:32:32.975375 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7grhf"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.065433 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6014f5be-ec67-4cfd-89f7-74db5e786dc0-operator-scripts\") pod \"nova-api-db-create-7grhf\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.065568 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qft9f\" (UniqueName: \"kubernetes.io/projected/6014f5be-ec67-4cfd-89f7-74db5e786dc0-kube-api-access-qft9f\") pod \"nova-api-db-create-7grhf\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.157079 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6tpht"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.158462 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.170008 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qft9f\" (UniqueName: \"kubernetes.io/projected/6014f5be-ec67-4cfd-89f7-74db5e786dc0-kube-api-access-qft9f\") pod \"nova-api-db-create-7grhf\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.170198 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6014f5be-ec67-4cfd-89f7-74db5e786dc0-operator-scripts\") pod \"nova-api-db-create-7grhf\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.171115 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6014f5be-ec67-4cfd-89f7-74db5e786dc0-operator-scripts\") pod \"nova-api-db-create-7grhf\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.201403 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6tpht"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.274729 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/565dc4e0-05d9-4e31-8a8a-0865909b2523-operator-scripts\") pod \"nova-cell0-db-create-6tpht\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.274777 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wktlq\" (UniqueName: \"kubernetes.io/projected/565dc4e0-05d9-4e31-8a8a-0865909b2523-kube-api-access-wktlq\") pod \"nova-cell0-db-create-6tpht\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.280027 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qft9f\" (UniqueName: \"kubernetes.io/projected/6014f5be-ec67-4cfd-89f7-74db5e786dc0-kube-api-access-qft9f\") pod \"nova-api-db-create-7grhf\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.346799 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pw6nq"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.348134 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.366496 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.376501 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/565dc4e0-05d9-4e31-8a8a-0865909b2523-operator-scripts\") pod \"nova-cell0-db-create-6tpht\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.376553 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wktlq\" (UniqueName: \"kubernetes.io/projected/565dc4e0-05d9-4e31-8a8a-0865909b2523-kube-api-access-wktlq\") pod \"nova-cell0-db-create-6tpht\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.378263 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pw6nq"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.384315 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/565dc4e0-05d9-4e31-8a8a-0865909b2523-operator-scripts\") pod \"nova-cell0-db-create-6tpht\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.445565 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wktlq\" (UniqueName: \"kubernetes.io/projected/565dc4e0-05d9-4e31-8a8a-0865909b2523-kube-api-access-wktlq\") pod \"nova-cell0-db-create-6tpht\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.458014 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2caf-account-create-update-lqcj8"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.459317 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.467500 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.476331 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2caf-account-create-update-lqcj8"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.510591 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktmt\" (UniqueName: \"kubernetes.io/projected/c1589cfa-091d-47f6-bd8f-0db0f5756cce-kube-api-access-fktmt\") pod \"nova-cell1-db-create-pw6nq\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.510916 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1589cfa-091d-47f6-bd8f-0db0f5756cce-operator-scripts\") pod \"nova-cell1-db-create-pw6nq\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.511252 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.614410 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fktmt\" (UniqueName: \"kubernetes.io/projected/c1589cfa-091d-47f6-bd8f-0db0f5756cce-kube-api-access-fktmt\") pod \"nova-cell1-db-create-pw6nq\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.614810 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hzwl\" (UniqueName: \"kubernetes.io/projected/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-kube-api-access-9hzwl\") pod \"nova-cell0-2caf-account-create-update-lqcj8\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.614933 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1589cfa-091d-47f6-bd8f-0db0f5756cce-operator-scripts\") pod \"nova-cell1-db-create-pw6nq\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.615008 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-operator-scripts\") pod \"nova-cell0-2caf-account-create-update-lqcj8\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.616039 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1589cfa-091d-47f6-bd8f-0db0f5756cce-operator-scripts\") pod \"nova-cell1-db-create-pw6nq\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.672283 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d3bd-account-create-update-lnz8z"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.676831 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.687660 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.689116 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fktmt\" (UniqueName: \"kubernetes.io/projected/c1589cfa-091d-47f6-bd8f-0db0f5756cce-kube-api-access-fktmt\") pod \"nova-cell1-db-create-pw6nq\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.726066 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-operator-scripts\") pod \"nova-cell0-2caf-account-create-update-lqcj8\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.735301 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hzwl\" (UniqueName: \"kubernetes.io/projected/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-kube-api-access-9hzwl\") pod \"nova-cell0-2caf-account-create-update-lqcj8\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.736076 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-operator-scripts\") pod \"nova-cell0-2caf-account-create-update-lqcj8\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.766591 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.877668 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d3bd-account-create-update-lnz8z"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.889740 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.889838 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-internal-tls-certs\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.889884 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnq8\" (UniqueName: \"kubernetes.io/projected/123116af-ca93-48d5-95ef-9154cda84c60-kube-api-access-psnq8\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.889913 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-public-tls-certs\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.889971 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-combined-ca-bundle\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.890012 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-scripts\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.890056 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123116af-ca93-48d5-95ef-9154cda84c60-logs\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.890469 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6804ceff-36ec-4004-baf8-69e65d998378-operator-scripts\") pod \"nova-api-d3bd-account-create-update-lnz8z\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.890537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zwxv\" (UniqueName: \"kubernetes.io/projected/6804ceff-36ec-4004-baf8-69e65d998378-kube-api-access-8zwxv\") pod \"nova-api-d3bd-account-create-update-lnz8z\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " 
pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.901811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6f668b64-t5tsj" event={"ID":"123116af-ca93-48d5-95ef-9154cda84c60","Type":"ContainerDied","Data":"b76b515c1c410baecf94099b606a79634f6fd13e47d57be3fff16496477d2db0"} Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.901866 4724 scope.go:117] "RemoveContainer" containerID="b4cc1fec3b8aae9856d581f9f595a4e4629887f44d6b9ff89ce4e94b5030aa9e" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.905439 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123116af-ca93-48d5-95ef-9154cda84c60-logs" (OuterVolumeSpecName: "logs") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.930089 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hzwl\" (UniqueName: \"kubernetes.io/projected/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-kube-api-access-9hzwl\") pod \"nova-cell0-2caf-account-create-update-lqcj8\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.937167 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3641-account-create-update-fq2s7"] Feb 26 11:32:33 crc kubenswrapper[4724]: E0226 11:32:33.938013 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-log" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.938029 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-log" Feb 26 11:32:33 crc kubenswrapper[4724]: E0226 11:32:33.938045 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-api" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.938053 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-api" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.938294 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-log" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.938312 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="123116af-ca93-48d5-95ef-9154cda84c60" containerName="placement-api" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.939195 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.945706 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.946246 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3641-account-create-update-fq2s7"] Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.977074 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-scripts" (OuterVolumeSpecName: "scripts") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.997428 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vfq\" (UniqueName: \"kubernetes.io/projected/86a8dfd5-eeec-402d-a5fb-a087eae65b81-kube-api-access-98vfq\") pod \"nova-cell1-3641-account-create-update-fq2s7\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.997948 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zwxv\" (UniqueName: \"kubernetes.io/projected/6804ceff-36ec-4004-baf8-69e65d998378-kube-api-access-8zwxv\") pod \"nova-api-d3bd-account-create-update-lnz8z\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.998281 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6804ceff-36ec-4004-baf8-69e65d998378-operator-scripts\") pod \"nova-api-d3bd-account-create-update-lnz8z\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.998606 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a8dfd5-eeec-402d-a5fb-a087eae65b81-operator-scripts\") pod \"nova-cell1-3641-account-create-update-fq2s7\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:33 crc kubenswrapper[4724]: I0226 11:32:33.999433 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123116af-ca93-48d5-95ef-9154cda84c60-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.003345 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.007517 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.009344 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6804ceff-36ec-4004-baf8-69e65d998378-operator-scripts\") pod \"nova-api-d3bd-account-create-update-lnz8z\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.035991 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123116af-ca93-48d5-95ef-9154cda84c60-kube-api-access-psnq8" (OuterVolumeSpecName: "kube-api-access-psnq8") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "kube-api-access-psnq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.101829 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zwxv\" (UniqueName: \"kubernetes.io/projected/6804ceff-36ec-4004-baf8-69e65d998378-kube-api-access-8zwxv\") pod \"nova-api-d3bd-account-create-update-lnz8z\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.105854 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a8dfd5-eeec-402d-a5fb-a087eae65b81-operator-scripts\") pod \"nova-cell1-3641-account-create-update-fq2s7\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.105902 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98vfq\" (UniqueName: \"kubernetes.io/projected/86a8dfd5-eeec-402d-a5fb-a087eae65b81-kube-api-access-98vfq\") pod \"nova-cell1-3641-account-create-update-fq2s7\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.106092 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psnq8\" (UniqueName: \"kubernetes.io/projected/123116af-ca93-48d5-95ef-9154cda84c60-kube-api-access-psnq8\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.113645 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a8dfd5-eeec-402d-a5fb-a087eae65b81-operator-scripts\") pod \"nova-cell1-3641-account-create-update-fq2s7\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.155514 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98vfq\" (UniqueName: \"kubernetes.io/projected/86a8dfd5-eeec-402d-a5fb-a087eae65b81-kube-api-access-98vfq\") pod \"nova-cell1-3641-account-create-update-fq2s7\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.188446 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.199973 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.225545 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.266959 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.310765 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.368712 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.413539 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8d446a7-c07f-4b3d-ae55-a2246b928864-etc-machine-id\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.413658 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-scripts\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.413709 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.413746 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d446a7-c07f-4b3d-ae55-a2246b928864-logs\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.413802 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-combined-ca-bundle\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.413940 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data-custom\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.415601 4724 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkwgz\" (UniqueName: \"kubernetes.io/projected/c8d446a7-c07f-4b3d-ae55-a2246b928864-kube-api-access-nkwgz\") pod \"c8d446a7-c07f-4b3d-ae55-a2246b928864\" (UID: \"c8d446a7-c07f-4b3d-ae55-a2246b928864\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.436980 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d446a7-c07f-4b3d-ae55-a2246b928864-logs" (OuterVolumeSpecName: "logs") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.437047 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8d446a7-c07f-4b3d-ae55-a2246b928864-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.453847 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.481797 4724 scope.go:117] "RemoveContainer" containerID="58e9ca4f8eb246caaabc4b62bd3b5f71753945816dc69ebbac750df5a38a5f04" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.531908 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-sg-core-conf-yaml\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.532237 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-config-data\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.532267 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-scripts\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.532480 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data" (OuterVolumeSpecName: "config-data") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.533120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f7zw\" (UniqueName: \"kubernetes.io/projected/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-kube-api-access-7f7zw\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.533171 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-log-httpd\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.533509 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data\") pod \"123116af-ca93-48d5-95ef-9154cda84c60\" (UID: \"123116af-ca93-48d5-95ef-9154cda84c60\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.533622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-run-httpd\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.533663 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-combined-ca-bundle\") pod \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\" (UID: \"b96e9c06-0ce9-46b6-9422-a0729d93d8d6\") " Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.535513 4724 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8d446a7-c07f-4b3d-ae55-a2246b928864-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.535532 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d446a7-c07f-4b3d-ae55-a2246b928864-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: W0226 11:32:34.547343 4724 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/123116af-ca93-48d5-95ef-9154cda84c60/volumes/kubernetes.io~secret/config-data Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.547382 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data" (OuterVolumeSpecName: "config-data") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.548302 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.548651 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.548928 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-kube-api-access-7f7zw" (OuterVolumeSpecName: "kube-api-access-7f7zw") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "kube-api-access-7f7zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.554361 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d446a7-c07f-4b3d-ae55-a2246b928864-kube-api-access-nkwgz" (OuterVolumeSpecName: "kube-api-access-nkwgz") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "kube-api-access-nkwgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.564357 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.592409 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-scripts" (OuterVolumeSpecName: "scripts") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701080 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkwgz\" (UniqueName: \"kubernetes.io/projected/c8d446a7-c07f-4b3d-ae55-a2246b928864-kube-api-access-nkwgz\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701118 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701129 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701150 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f7zw\" (UniqueName: \"kubernetes.io/projected/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-kube-api-access-7f7zw\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701159 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701168 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.701231 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.734085 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-scripts" (OuterVolumeSpecName: "scripts") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.735650 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-78fbbcf444-k8n4t"] Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.759118 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.803116 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.803250 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.832464 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6746496466-bz5b7"] Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.852375 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.912542 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:34 crc kubenswrapper[4724]: W0226 11:32:34.960823 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbb8b99_0b04_48c7_904e_a5c5304813a3.slice/crio-119c0c7087675209fae03942c19cdb917b0736041d81a633c817fa7fb7b5cc2c WatchSource:0}: Error finding container 119c0c7087675209fae03942c19cdb917b0736041d81a633c817fa7fb7b5cc2c: Status 404 returned error can't find the container with id 119c0c7087675209fae03942c19cdb917b0736041d81a633c817fa7fb7b5cc2c Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.971851 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.981541 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5bbc75466c-6dmf6"] Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.981822 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59b8f9f788-85hsf" event={"ID":"cd5f2a9d-eba9-4157-9b34-fba1714fa562","Type":"ContainerDied","Data":"41e8be9a72bbf4dade2221adc7533832eb6dfbfa124a269abd49ee46954823f6"} Feb 26 11:32:34 crc kubenswrapper[4724]: I0226 11:32:34.981917 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41e8be9a72bbf4dade2221adc7533832eb6dfbfa124a269abd49ee46954823f6" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.000481 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-746558bfbf-gbdpm"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.003558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6746496466-bz5b7" event={"ID":"25ee5971-289d-4cf3-852d-e6473c97582f","Type":"ContainerStarted","Data":"fc5e9cf4e830a5324da1fd66effddc45471af1e7de889b5c5863f7058a29a2af"} Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.015166 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.019978 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data" (OuterVolumeSpecName: "config-data") pod "c8d446a7-c07f-4b3d-ae55-a2246b928864" (UID: "c8d446a7-c07f-4b3d-ae55-a2246b928864"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.083026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8d446a7-c07f-4b3d-ae55-a2246b928864","Type":"ContainerDied","Data":"99b1e3bfdca31a5246137e552281632bcae2ae7ef8de29715186bc98502fe65c"} Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.083079 4724 scope.go:117] "RemoveContainer" containerID="a1dcc4c5ffad43041b42764e046bd7ba4586367e5215f094e3beeafabb918d90" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.083210 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.082792 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "123116af-ca93-48d5-95ef-9154cda84c60" (UID: "123116af-ca93-48d5-95ef-9154cda84c60"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.117308 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/123116af-ca93-48d5-95ef-9154cda84c60-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.117338 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d446a7-c07f-4b3d-ae55-a2246b928864-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.123712 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c6f668b64-t5tsj" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.221785 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.240753 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.241400 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96e9c06-0ce9-46b6-9422-a0729d93d8d6","Type":"ContainerDied","Data":"36c5f8179e5443d7c2914cb830c0bd210cc2b11d15076ace10ac7852e8add760"} Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.241496 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.257310 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-config-data" (OuterVolumeSpecName: "config-data") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.282901 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.293128 4724 scope.go:117] "RemoveContainer" containerID="ba77161bb6308333cb77b913cdff12582848a60145d93f8db3673399c868f9df" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.326766 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-config\") pod \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.326827 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-httpd-config\") pod \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.328707 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-combined-ca-bundle\") pod \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.330023 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6md9m\" (UniqueName: \"kubernetes.io/projected/cd5f2a9d-eba9-4157-9b34-fba1714fa562-kube-api-access-6md9m\") pod \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.344855 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-ovndb-tls-certs\") pod \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\" (UID: \"cd5f2a9d-eba9-4157-9b34-fba1714fa562\") " Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.345746 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.357766 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cd5f2a9d-eba9-4157-9b34-fba1714fa562" (UID: "cd5f2a9d-eba9-4157-9b34-fba1714fa562"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.373277 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.373849 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="sg-core" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.373877 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="sg-core" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.373901 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.373913 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.373926 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-notification-agent" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.373934 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-notification-agent" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.373945 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-central-agent" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.373951 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-central-agent" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.373977 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.373984 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.374003 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="proxy-httpd" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.374012 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="proxy-httpd" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.374020 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-api" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.374025 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-api" Feb 26 11:32:35 crc kubenswrapper[4724]: E0226 11:32:35.374032 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api-log" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.374835 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api-log" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375079 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" 
containerName="proxy-httpd" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375104 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-central-agent" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375115 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-httpd" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375124 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375133 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="sg-core" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375141 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" containerName="neutron-api" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375153 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" containerName="ceilometer-notification-agent" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.375163 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" containerName="cinder-api-log" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.376197 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.380733 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.381010 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.381264 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.398421 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5f2a9d-eba9-4157-9b34-fba1714fa562-kube-api-access-6md9m" (OuterVolumeSpecName: "kube-api-access-6md9m") pod "cd5f2a9d-eba9-4157-9b34-fba1714fa562" (UID: "cd5f2a9d-eba9-4157-9b34-fba1714fa562"). InnerVolumeSpecName "kube-api-access-6md9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.448759 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.448800 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6md9m\" (UniqueName: \"kubernetes.io/projected/cd5f2a9d-eba9-4157-9b34-fba1714fa562-kube-api-access-6md9m\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.456575 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c6f668b64-t5tsj"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.473990 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b96e9c06-0ce9-46b6-9422-a0729d93d8d6" (UID: "b96e9c06-0ce9-46b6-9422-a0729d93d8d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.474149 4724 scope.go:117] "RemoveContainer" containerID="5d496e314b4816b519be635885b65799f0c0f04a9d3e5ade9fed904a33bfe612" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.560362 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a66d564c-8f30-413c-8026-578de3a429d4-logs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.561599 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-config-data\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.561668 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.561767 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.561900 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-config-data-custom\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.562011 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a66d564c-8f30-413c-8026-578de3a429d4-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.566903 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6hl\" (UniqueName: \"kubernetes.io/projected/a66d564c-8f30-413c-8026-578de3a429d4-kube-api-access-4r6hl\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.566958 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.567072 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-scripts\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.567334 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96e9c06-0ce9-46b6-9422-a0729d93d8d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.568138 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6c6f668b64-t5tsj"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.591307 4724 scope.go:117] "RemoveContainer" containerID="b1755022a67635c13cc93d63ca7f3ebc54ada71b41627fd77443fbfb898c0b3f" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.659402 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669007 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669258 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-config-data-custom\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669376 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a66d564c-8f30-413c-8026-578de3a429d4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6hl\" (UniqueName: \"kubernetes.io/projected/a66d564c-8f30-413c-8026-578de3a429d4-kube-api-access-4r6hl\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669555 4724 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669677 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-scripts\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669781 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a66d564c-8f30-413c-8026-578de3a429d4-logs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.669908 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-config-data\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.670003 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.678700 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a66d564c-8f30-413c-8026-578de3a429d4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.679341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a66d564c-8f30-413c-8026-578de3a429d4-logs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.689616 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cd5f2a9d-eba9-4157-9b34-fba1714fa562" (UID: "cd5f2a9d-eba9-4157-9b34-fba1714fa562"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.691768 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-config-data-custom\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.693495 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-config-data\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.703935 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.706983 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.719939 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-scripts\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.744146 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6hl\" (UniqueName: \"kubernetes.io/projected/a66d564c-8f30-413c-8026-578de3a429d4-kube-api-access-4r6hl\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.763663 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66d564c-8f30-413c-8026-578de3a429d4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a66d564c-8f30-413c-8026-578de3a429d4\") " pod="openstack/cinder-api-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.769378 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd5f2a9d-eba9-4157-9b34-fba1714fa562" (UID: "cd5f2a9d-eba9-4157-9b34-fba1714fa562"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.770868 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-58cc4895d6-7zzgw"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.772192 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.772208 4724 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.782680 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-76f4bfd896-xsknh"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.793481 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dcl5w"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.826650 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78c4954f9c-cxzbb"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.830398 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-config" (OuterVolumeSpecName: "config") pod "cd5f2a9d-eba9-4157-9b34-fba1714fa562" (UID: "cd5f2a9d-eba9-4157-9b34-fba1714fa562"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.847934 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.857290 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.874498 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.875481 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd5f2a9d-eba9-4157-9b34-fba1714fa562-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.878959 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.884568 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.884725 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.958857 4724 scope.go:117] "RemoveContainer" containerID="80262f68c3a17cdab8a02f47df7f79ab1a05f36ef2ad0ca829ec203bd02216e4" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.960321 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.999725 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.999773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-scripts\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:35 crc kubenswrapper[4724]: I0226 11:32:35.999803 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-run-httpd\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:35.999914 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:35.999959 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-log-httpd\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:35.999999 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-config-data\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.000021 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s986\" (UniqueName: \"kubernetes.io/projected/a2972a18-f09b-4535-ae61-0e6b9498d094-kube-api-access-8s986\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.047000 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 11:32:36 crc kubenswrapper[4724]: W0226 11:32:36.081458 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e104b5a_57be_474d_957f_25a86e9111a1.slice/crio-5c40a5488a7a3ad30c18abfec8024bd145f756b5137493b583895e3e22c811d1 WatchSource:0}: Error finding container 5c40a5488a7a3ad30c18abfec8024bd145f756b5137493b583895e3e22c811d1: Status 404 returned error can't find the container with id 5c40a5488a7a3ad30c18abfec8024bd145f756b5137493b583895e3e22c811d1 Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.095071 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123116af-ca93-48d5-95ef-9154cda84c60" path="/var/lib/kubelet/pods/123116af-ca93-48d5-95ef-9154cda84c60/volumes" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.097361 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96e9c06-0ce9-46b6-9422-a0729d93d8d6" path="/var/lib/kubelet/pods/b96e9c06-0ce9-46b6-9422-a0729d93d8d6/volumes" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.110972 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d446a7-c07f-4b3d-ae55-a2246b928864" path="/var/lib/kubelet/pods/c8d446a7-c07f-4b3d-ae55-a2246b928864/volumes" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.113760 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.114018 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-log-httpd\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.114069 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-config-data\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.114224 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s986\" (UniqueName: \"kubernetes.io/projected/a2972a18-f09b-4535-ae61-0e6b9498d094-kube-api-access-8s986\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.114314 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.114341 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-scripts\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.114384 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-run-httpd\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.117248 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-674894f85d-fwnwf"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.117294 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b57bf547-ctb72"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.124964 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-config-data\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.127458 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-log-httpd\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.186687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-run-httpd\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.231305 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.232268 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.232722 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-scripts\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.241578 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s986\" (UniqueName: \"kubernetes.io/projected/a2972a18-f09b-4535-ae61-0e6b9498d094-kube-api-access-8s986\") pod \"ceilometer-0\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.263306 4724 scope.go:117] "RemoveContainer" containerID="a85bc78dafa79d8f07279a4aa337f47c7589205dbd700e113817acf807b1a9bb" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.270013 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6tpht"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.288394 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7grhf"] Feb 26 11:32:36 crc 
kubenswrapper[4724]: I0226 11:32:36.293340 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-674894f85d-fwnwf" event={"ID":"dfb2bad0-3923-4242-9339-b88cc85fc206","Type":"ContainerStarted","Data":"88b39ac7022b373b0eaccbf2de382fda65c1b57e41a9155d164cbfda9fffdad3"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.301474 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b57bf547-ctb72" event={"ID":"4e104b5a-57be-474d-957f-25a86e9111a1","Type":"ContainerStarted","Data":"5c40a5488a7a3ad30c18abfec8024bd145f756b5137493b583895e3e22c811d1"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.304522 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" event={"ID":"e57d7bd1-267a-4643-9581-8554109f7cba","Type":"ContainerStarted","Data":"e3f6e6207c20032294395bcaa334a6b8bf45ff649bba6e7ce6adbd119b85bcc1"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.317423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76f4bfd896-xsknh" event={"ID":"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d","Type":"ContainerStarted","Data":"db2e35d45d784a1d6790f195ef39cf3d3423584c85a2816ef017d2cc0d74eb7b"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.341650 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58cc4895d6-7zzgw" event={"ID":"60dc589b-0663-4d44-a1aa-c57772731f5b","Type":"ContainerStarted","Data":"12524255dde2ce755e5342c68f66a60774c9094fab35d2e7eae33ced63c20ad6"} Feb 26 11:32:36 crc kubenswrapper[4724]: W0226 11:32:36.346600 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod565dc4e0_05d9_4e31_8a8a_0865909b2523.slice/crio-3d8b9c33e105587807cc15b16e01cefd863d1ca713451d45bc9256b2ca70150d WatchSource:0}: Error finding container 3d8b9c33e105587807cc15b16e01cefd863d1ca713451d45bc9256b2ca70150d: Status 404 returned error can't find the container with id 3d8b9c33e105587807cc15b16e01cefd863d1ca713451d45bc9256b2ca70150d Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.362055 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78fbbcf444-k8n4t" event={"ID":"791d107b-678e-448e-859c-864e9e66dd16","Type":"ContainerStarted","Data":"cebc493d2623a57f01591bc5aece4440e5beaed9c4b4a377a8617de34dfc0b5d"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.367814 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-746558bfbf-gbdpm" event={"ID":"acbb8b99-0b04-48c7-904e-a5c5304813a3","Type":"ContainerStarted","Data":"119c0c7087675209fae03942c19cdb917b0736041d81a633c817fa7fb7b5cc2c"} Feb 26 11:32:36 crc kubenswrapper[4724]: W0226 11:32:36.376625 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6014f5be_ec67_4cfd_89f7_74db5e786dc0.slice/crio-9e0eaaaf9710d062becd12f478411fc9d7b11f2cd9e4657a9964fbac58b0f777 WatchSource:0}: Error finding container 9e0eaaaf9710d062becd12f478411fc9d7b11f2cd9e4657a9964fbac58b0f777: Status 404 returned error can't find the container with id 9e0eaaaf9710d062becd12f478411fc9d7b11f2cd9e4657a9964fbac58b0f777 Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.378770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c4954f9c-cxzbb" 
event={"ID":"b1c44043-5a26-44d6-bcf3-9f723e9e3f06","Type":"ContainerStarted","Data":"74cc554dd6a7009ecaed57e60d0ca0303c6ec2f2e689bbbbbafc5fdd7c7ad462"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.401528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" event={"ID":"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff","Type":"ContainerStarted","Data":"76bb2cb2d58dcb16e18a415bb4fb81d75f94d44023c4872ae71ae870158cf1ad"} Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.401575 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59b8f9f788-85hsf" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.418596 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.446536 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59b8f9f788-85hsf"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.460167 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59b8f9f788-85hsf"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.526374 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2caf-account-create-update-lqcj8"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.538926 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pw6nq"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.565984 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d3bd-account-create-update-lnz8z"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.610993 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.665901 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.770721 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3641-account-create-update-fq2s7"] Feb 26 11:32:36 crc kubenswrapper[4724]: I0226 11:32:36.979351 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 11:32:37 crc kubenswrapper[4724]: W0226 11:32:37.174542 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda66d564c_8f30_413c_8026_578de3a429d4.slice/crio-ffbc2681b7c35ca45c3315ed15d564e3b40aab4af98b809b244bc3258655cbdf WatchSource:0}: Error finding container ffbc2681b7c35ca45c3315ed15d564e3b40aab4af98b809b244bc3258655cbdf: Status 404 returned error can't find the container with id ffbc2681b7c35ca45c3315ed15d564e3b40aab4af98b809b244bc3258655cbdf Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.407133 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.472277 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-746558bfbf-gbdpm" event={"ID":"acbb8b99-0b04-48c7-904e-a5c5304813a3","Type":"ContainerStarted","Data":"08ab6c6a6a333b4fb2f78e1404166da4908f5dcb497c6b17177f9bc92c015e84"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.478583 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" 
event={"ID":"7ddd79bf-b594-45ff-95e6-69bb0bc58dca","Type":"ContainerStarted","Data":"485161dfba7b69e2209075deabdb447f9c2601f43a01febebcbff443edeec736"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.480927 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" event={"ID":"6804ceff-36ec-4004-baf8-69e65d998378","Type":"ContainerStarted","Data":"d6a3c36725b394596e2fc63a00416c856d87455b8786f547e6d7dfd62b5342b2"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.485373 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7grhf" event={"ID":"6014f5be-ec67-4cfd-89f7-74db5e786dc0","Type":"ContainerStarted","Data":"9e0eaaaf9710d062becd12f478411fc9d7b11f2cd9e4657a9964fbac58b0f777"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.500455 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" event={"ID":"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff","Type":"ContainerStarted","Data":"a23afe89200de432a0a27176e53969b38466d0d4035e813f17a04a81115d7c2f"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.508144 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6tpht" event={"ID":"565dc4e0-05d9-4e31-8a8a-0865909b2523","Type":"ContainerStarted","Data":"684ca4f93f7b4f276a1adfbd9a7f4246ffc141e2e1a7e712ee79bdcc3a738275"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.508240 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6tpht" event={"ID":"565dc4e0-05d9-4e31-8a8a-0865909b2523","Type":"ContainerStarted","Data":"3d8b9c33e105587807cc15b16e01cefd863d1ca713451d45bc9256b2ca70150d"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.544606 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6tpht" podStartSLOduration=4.544580586 podStartE2EDuration="4.544580586s" podCreationTimestamp="2026-02-26 11:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:37.538598704 +0000 UTC m=+1624.194337839" watchObservedRunningTime="2026-02-26 11:32:37.544580586 +0000 UTC m=+1624.200319701" Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.549468 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" event={"ID":"86a8dfd5-eeec-402d-a5fb-a087eae65b81","Type":"ContainerStarted","Data":"39ab14fef661742882bde9a2b3b4a76eca58b23d22efefb56e1e685a1996c52d"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.574604 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6746496466-bz5b7" event={"ID":"25ee5971-289d-4cf3-852d-e6473c97582f","Type":"ContainerStarted","Data":"bb4f16c6d5fa10f2ad577b6dfe9e380b9656ebd727c25424943ba552aa274849"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.575814 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.599625 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78fbbcf444-k8n4t" event={"ID":"791d107b-678e-448e-859c-864e9e66dd16","Type":"ContainerStarted","Data":"26369e5b7a8e53726642f29832e9ffbcfd4c5f60e77bba09cd706ab0479cdc98"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.601582 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.624908 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a66d564c-8f30-413c-8026-578de3a429d4","Type":"ContainerStarted","Data":"ffbc2681b7c35ca45c3315ed15d564e3b40aab4af98b809b244bc3258655cbdf"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.638550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pw6nq" event={"ID":"c1589cfa-091d-47f6-bd8f-0db0f5756cce","Type":"ContainerStarted","Data":"1eb76bb1bb025dac6c30f0123f427c73d49d9acfc9b4561060da0b355e1db739"} Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.676596 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6746496466-bz5b7" podStartSLOduration=26.676569453 podStartE2EDuration="26.676569453s" podCreationTimestamp="2026-02-26 11:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:37.605211108 +0000 UTC m=+1624.260950233" watchObservedRunningTime="2026-02-26 11:32:37.676569453 +0000 UTC m=+1624.332308578" Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.683510 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-78fbbcf444-k8n4t" podStartSLOduration=13.683482879 podStartE2EDuration="13.683482879s" podCreationTimestamp="2026-02-26 11:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:37.637317475 +0000 UTC m=+1624.293056600" watchObservedRunningTime="2026-02-26 11:32:37.683482879 +0000 UTC m=+1624.339222004" Feb 26 11:32:37 crc kubenswrapper[4724]: I0226 11:32:37.991769 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd5f2a9d-eba9-4157-9b34-fba1714fa562" path="/var/lib/kubelet/pods/cd5f2a9d-eba9-4157-9b34-fba1714fa562/volumes" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.063126 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.063214 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.064130 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7099fe5c31115c0b722be7a13c0a9feb5c472f77246d6698e652b193791a6781"} pod="openstack/horizon-ddfb9fd96-hzc8c" containerMessage="Container horizon failed startup probe, will be restarted" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.064169 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" containerID="cri-o://7099fe5c31115c0b722be7a13c0a9feb5c472f77246d6698e652b193791a6781" gracePeriod=30 Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.369118 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" 
podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.369421 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.370150 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ac398868e5679a7aa01f6bdf65598f3111cd3c8e4085be5a0b71236c8e2306eb"} pod="openstack/horizon-57977849d4-8s5ds" containerMessage="Container horizon failed startup probe, will be restarted" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.370216 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" containerID="cri-o://ac398868e5679a7aa01f6bdf65598f3111cd3c8e4085be5a0b71236c8e2306eb" gracePeriod=30 Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.663450 4724 generic.go:334] "Generic (PLEG): container finished" podID="565dc4e0-05d9-4e31-8a8a-0865909b2523" containerID="684ca4f93f7b4f276a1adfbd9a7f4246ffc141e2e1a7e712ee79bdcc3a738275" exitCode=0 Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.663530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6tpht" event={"ID":"565dc4e0-05d9-4e31-8a8a-0865909b2523","Type":"ContainerDied","Data":"684ca4f93f7b4f276a1adfbd9a7f4246ffc141e2e1a7e712ee79bdcc3a738275"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.681921 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerStarted","Data":"db69f43307b8931addd5cdea8a6bddd6c238d5db0c357e17cb7fdfa91781aff6"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.710301 4724 generic.go:334] "Generic (PLEG): container finished" podID="6804ceff-36ec-4004-baf8-69e65d998378" containerID="0c21e402f6c3d81a7341fec53fae9339e565f3e6cf87a08555fd4d3504e0b875" exitCode=0 Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.710425 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" event={"ID":"6804ceff-36ec-4004-baf8-69e65d998378","Type":"ContainerDied","Data":"0c21e402f6c3d81a7341fec53fae9339e565f3e6cf87a08555fd4d3504e0b875"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.720108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pw6nq" event={"ID":"c1589cfa-091d-47f6-bd8f-0db0f5756cce","Type":"ContainerStarted","Data":"685ed3136017ae0ff55073399d58ad796fb73f378f72676c21f60ecc6f86cedb"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.728211 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" event={"ID":"86a8dfd5-eeec-402d-a5fb-a087eae65b81","Type":"ContainerStarted","Data":"e72cf10a2d626e9777a88fb79acd2b087f1c80291a99a3d78bd179f459f362d3"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.757883 4724 generic.go:334] "Generic (PLEG): container finished" podID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerID="a23afe89200de432a0a27176e53969b38466d0d4035e813f17a04a81115d7c2f" exitCode=0 Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.757969 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" event={"ID":"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff","Type":"ContainerDied","Data":"a23afe89200de432a0a27176e53969b38466d0d4035e813f17a04a81115d7c2f"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.757996 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" event={"ID":"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff","Type":"ContainerStarted","Data":"f93b9dad1ecf2b8546d11834589eb95fac7383a65714b01795185f8e02ab1be6"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.758276 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.760925 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-pw6nq" podStartSLOduration=5.76090398 podStartE2EDuration="5.76090398s" podCreationTimestamp="2026-02-26 11:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:38.745589791 +0000 UTC m=+1625.401328916" watchObservedRunningTime="2026-02-26 11:32:38.76090398 +0000 UTC m=+1625.416643095" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.776991 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" podStartSLOduration=5.776973749 podStartE2EDuration="5.776973749s" podCreationTimestamp="2026-02-26 11:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:38.771783857 +0000 UTC m=+1625.427522972" watchObservedRunningTime="2026-02-26 11:32:38.776973749 +0000 UTC m=+1625.432712864" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.782086 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-746558bfbf-gbdpm" event={"ID":"acbb8b99-0b04-48c7-904e-a5c5304813a3","Type":"ContainerStarted","Data":"d199618012579fc2a2b440df019e8e8b8c1dcae1983f7ac4d040f203e7c07d33"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.783053 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.783084 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.786776 4724 generic.go:334] "Generic (PLEG): container finished" podID="7ddd79bf-b594-45ff-95e6-69bb0bc58dca" containerID="bfdb3719c294050a657994e90245081edae1735458b800544e754ffadbead17e" exitCode=0 Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.786830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" event={"ID":"7ddd79bf-b594-45ff-95e6-69bb0bc58dca","Type":"ContainerDied","Data":"bfdb3719c294050a657994e90245081edae1735458b800544e754ffadbead17e"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.804271 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7grhf" event={"ID":"6014f5be-ec67-4cfd-89f7-74db5e786dc0","Type":"ContainerStarted","Data":"0d12246f7ac283fd04bef33e790f782f6d2590bdb3c6fb6836964d4636a0ae6f"} Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.829484 4724 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" podStartSLOduration=27.829460834 podStartE2EDuration="27.829460834s" podCreationTimestamp="2026-02-26 11:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:38.798170198 +0000 UTC m=+1625.453909303" watchObservedRunningTime="2026-02-26 11:32:38.829460834 +0000 UTC m=+1625.485199939" Feb 26 11:32:38 crc kubenswrapper[4724]: I0226 11:32:38.865150 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-746558bfbf-gbdpm" podStartSLOduration=25.865125431 podStartE2EDuration="25.865125431s" podCreationTimestamp="2026-02-26 11:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:38.855710031 +0000 UTC m=+1625.511449166" watchObservedRunningTime="2026-02-26 11:32:38.865125431 +0000 UTC m=+1625.520864556" Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.832795 4724 generic.go:334] "Generic (PLEG): container finished" podID="86a8dfd5-eeec-402d-a5fb-a087eae65b81" containerID="e72cf10a2d626e9777a88fb79acd2b087f1c80291a99a3d78bd179f459f362d3" exitCode=0 Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.833047 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" event={"ID":"86a8dfd5-eeec-402d-a5fb-a087eae65b81","Type":"ContainerDied","Data":"e72cf10a2d626e9777a88fb79acd2b087f1c80291a99a3d78bd179f459f362d3"} Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.842474 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerStarted","Data":"393a2a65e400b32a8071e3114589ea56262d7c7ef7746528ee28bb2886942ff2"} Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.852000 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a66d564c-8f30-413c-8026-578de3a429d4","Type":"ContainerStarted","Data":"9e13b153a108f0b7c43a9cda0362f1693e8a065aee093d1af079eb018ba30f64"} Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.853939 4724 generic.go:334] "Generic (PLEG): container finished" podID="c1589cfa-091d-47f6-bd8f-0db0f5756cce" containerID="685ed3136017ae0ff55073399d58ad796fb73f378f72676c21f60ecc6f86cedb" exitCode=0 Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.853991 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pw6nq" event={"ID":"c1589cfa-091d-47f6-bd8f-0db0f5756cce","Type":"ContainerDied","Data":"685ed3136017ae0ff55073399d58ad796fb73f378f72676c21f60ecc6f86cedb"} Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.861927 4724 generic.go:334] "Generic (PLEG): container finished" podID="6014f5be-ec67-4cfd-89f7-74db5e786dc0" containerID="0d12246f7ac283fd04bef33e790f782f6d2590bdb3c6fb6836964d4636a0ae6f" exitCode=0 Feb 26 11:32:39 crc kubenswrapper[4724]: I0226 11:32:39.862210 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7grhf" event={"ID":"6014f5be-ec67-4cfd-89f7-74db5e786dc0","Type":"ContainerDied","Data":"0d12246f7ac283fd04bef33e790f782f6d2590bdb3c6fb6836964d4636a0ae6f"} Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.373945 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.382535 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.494006 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-operator-scripts\") pod \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.494103 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hzwl\" (UniqueName: \"kubernetes.io/projected/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-kube-api-access-9hzwl\") pod \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\" (UID: \"7ddd79bf-b594-45ff-95e6-69bb0bc58dca\") " Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.494153 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6804ceff-36ec-4004-baf8-69e65d998378-operator-scripts\") pod \"6804ceff-36ec-4004-baf8-69e65d998378\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.494305 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zwxv\" (UniqueName: \"kubernetes.io/projected/6804ceff-36ec-4004-baf8-69e65d998378-kube-api-access-8zwxv\") pod \"6804ceff-36ec-4004-baf8-69e65d998378\" (UID: \"6804ceff-36ec-4004-baf8-69e65d998378\") " Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.496317 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ddd79bf-b594-45ff-95e6-69bb0bc58dca" (UID: "7ddd79bf-b594-45ff-95e6-69bb0bc58dca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.496751 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6804ceff-36ec-4004-baf8-69e65d998378-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6804ceff-36ec-4004-baf8-69e65d998378" (UID: "6804ceff-36ec-4004-baf8-69e65d998378"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.502497 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6804ceff-36ec-4004-baf8-69e65d998378-kube-api-access-8zwxv" (OuterVolumeSpecName: "kube-api-access-8zwxv") pod "6804ceff-36ec-4004-baf8-69e65d998378" (UID: "6804ceff-36ec-4004-baf8-69e65d998378"). InnerVolumeSpecName "kube-api-access-8zwxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.503898 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-kube-api-access-9hzwl" (OuterVolumeSpecName: "kube-api-access-9hzwl") pod "7ddd79bf-b594-45ff-95e6-69bb0bc58dca" (UID: "7ddd79bf-b594-45ff-95e6-69bb0bc58dca"). InnerVolumeSpecName "kube-api-access-9hzwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.596713 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zwxv\" (UniqueName: \"kubernetes.io/projected/6804ceff-36ec-4004-baf8-69e65d998378-kube-api-access-8zwxv\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.596767 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.596780 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hzwl\" (UniqueName: \"kubernetes.io/projected/7ddd79bf-b594-45ff-95e6-69bb0bc58dca-kube-api-access-9hzwl\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.596793 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6804ceff-36ec-4004-baf8-69e65d998378-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.884562 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" event={"ID":"7ddd79bf-b594-45ff-95e6-69bb0bc58dca","Type":"ContainerDied","Data":"485161dfba7b69e2209075deabdb447f9c2601f43a01febebcbff443edeec736"} Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.884890 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485161dfba7b69e2209075deabdb447f9c2601f43a01febebcbff443edeec736" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.884653 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2caf-account-create-update-lqcj8" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.886442 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" event={"ID":"6804ceff-36ec-4004-baf8-69e65d998378","Type":"ContainerDied","Data":"d6a3c36725b394596e2fc63a00416c856d87455b8786f547e6d7dfd62b5342b2"} Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.886490 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a3c36725b394596e2fc63a00416c856d87455b8786f547e6d7dfd62b5342b2" Feb 26 11:32:41 crc kubenswrapper[4724]: I0226 11:32:41.886559 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d3bd-account-create-update-lnz8z" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.669549 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.675605 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.684560 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.690941 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.826406 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98vfq\" (UniqueName: \"kubernetes.io/projected/86a8dfd5-eeec-402d-a5fb-a087eae65b81-kube-api-access-98vfq\") pod \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.826709 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6014f5be-ec67-4cfd-89f7-74db5e786dc0-operator-scripts\") pod \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.826746 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a8dfd5-eeec-402d-a5fb-a087eae65b81-operator-scripts\") pod \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\" (UID: \"86a8dfd5-eeec-402d-a5fb-a087eae65b81\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.826852 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qft9f\" (UniqueName: \"kubernetes.io/projected/6014f5be-ec67-4cfd-89f7-74db5e786dc0-kube-api-access-qft9f\") pod \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\" (UID: \"6014f5be-ec67-4cfd-89f7-74db5e786dc0\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.826901 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1589cfa-091d-47f6-bd8f-0db0f5756cce-operator-scripts\") pod \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.828058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wktlq\" (UniqueName: \"kubernetes.io/projected/565dc4e0-05d9-4e31-8a8a-0865909b2523-kube-api-access-wktlq\") pod \"565dc4e0-05d9-4e31-8a8a-0865909b2523\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.828140 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fktmt\" (UniqueName: \"kubernetes.io/projected/c1589cfa-091d-47f6-bd8f-0db0f5756cce-kube-api-access-fktmt\") pod \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\" (UID: \"c1589cfa-091d-47f6-bd8f-0db0f5756cce\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.828289 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/565dc4e0-05d9-4e31-8a8a-0865909b2523-operator-scripts\") pod \"565dc4e0-05d9-4e31-8a8a-0865909b2523\" (UID: \"565dc4e0-05d9-4e31-8a8a-0865909b2523\") " Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.851994 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6014f5be-ec67-4cfd-89f7-74db5e786dc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6014f5be-ec67-4cfd-89f7-74db5e786dc0" (UID: "6014f5be-ec67-4cfd-89f7-74db5e786dc0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.858281 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a8dfd5-eeec-402d-a5fb-a087eae65b81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86a8dfd5-eeec-402d-a5fb-a087eae65b81" (UID: "86a8dfd5-eeec-402d-a5fb-a087eae65b81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.866608 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6014f5be-ec67-4cfd-89f7-74db5e786dc0-kube-api-access-qft9f" (OuterVolumeSpecName: "kube-api-access-qft9f") pod "6014f5be-ec67-4cfd-89f7-74db5e786dc0" (UID: "6014f5be-ec67-4cfd-89f7-74db5e786dc0"). InnerVolumeSpecName "kube-api-access-qft9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.874216 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589cfa-091d-47f6-bd8f-0db0f5756cce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1589cfa-091d-47f6-bd8f-0db0f5756cce" (UID: "c1589cfa-091d-47f6-bd8f-0db0f5756cce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.876147 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565dc4e0-05d9-4e31-8a8a-0865909b2523-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "565dc4e0-05d9-4e31-8a8a-0865909b2523" (UID: "565dc4e0-05d9-4e31-8a8a-0865909b2523"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.885304 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565dc4e0-05d9-4e31-8a8a-0865909b2523-kube-api-access-wktlq" (OuterVolumeSpecName: "kube-api-access-wktlq") pod "565dc4e0-05d9-4e31-8a8a-0865909b2523" (UID: "565dc4e0-05d9-4e31-8a8a-0865909b2523"). InnerVolumeSpecName "kube-api-access-wktlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.894065 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a8dfd5-eeec-402d-a5fb-a087eae65b81-kube-api-access-98vfq" (OuterVolumeSpecName: "kube-api-access-98vfq") pod "86a8dfd5-eeec-402d-a5fb-a087eae65b81" (UID: "86a8dfd5-eeec-402d-a5fb-a087eae65b81"). InnerVolumeSpecName "kube-api-access-98vfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.910625 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1589cfa-091d-47f6-bd8f-0db0f5756cce-kube-api-access-fktmt" (OuterVolumeSpecName: "kube-api-access-fktmt") pod "c1589cfa-091d-47f6-bd8f-0db0f5756cce" (UID: "c1589cfa-091d-47f6-bd8f-0db0f5756cce"). InnerVolumeSpecName "kube-api-access-fktmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.938350 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pw6nq" event={"ID":"c1589cfa-091d-47f6-bd8f-0db0f5756cce","Type":"ContainerDied","Data":"1eb76bb1bb025dac6c30f0123f427c73d49d9acfc9b4561060da0b355e1db739"} Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.938389 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb76bb1bb025dac6c30f0123f427c73d49d9acfc9b4561060da0b355e1db739" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.938985 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pw6nq" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.953957 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wktlq\" (UniqueName: \"kubernetes.io/projected/565dc4e0-05d9-4e31-8a8a-0865909b2523-kube-api-access-wktlq\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.953989 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fktmt\" (UniqueName: \"kubernetes.io/projected/c1589cfa-091d-47f6-bd8f-0db0f5756cce-kube-api-access-fktmt\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.954000 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/565dc4e0-05d9-4e31-8a8a-0865909b2523-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.954009 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98vfq\" (UniqueName: \"kubernetes.io/projected/86a8dfd5-eeec-402d-a5fb-a087eae65b81-kube-api-access-98vfq\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.954023 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6014f5be-ec67-4cfd-89f7-74db5e786dc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.954034 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a8dfd5-eeec-402d-a5fb-a087eae65b81-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.954045 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qft9f\" (UniqueName: \"kubernetes.io/projected/6014f5be-ec67-4cfd-89f7-74db5e786dc0-kube-api-access-qft9f\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:42 crc kubenswrapper[4724]: I0226 11:32:42.954054 4724 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1589cfa-091d-47f6-bd8f-0db0f5756cce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.043549 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7grhf" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.044397 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7grhf" event={"ID":"6014f5be-ec67-4cfd-89f7-74db5e786dc0","Type":"ContainerDied","Data":"9e0eaaaf9710d062becd12f478411fc9d7b11f2cd9e4657a9964fbac58b0f777"} Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.044432 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e0eaaaf9710d062becd12f478411fc9d7b11f2cd9e4657a9964fbac58b0f777" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.068461 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.068550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3641-account-create-update-fq2s7" event={"ID":"86a8dfd5-eeec-402d-a5fb-a087eae65b81","Type":"ContainerDied","Data":"39ab14fef661742882bde9a2b3b4a76eca58b23d22efefb56e1e685a1996c52d"} Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.068580 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ab14fef661742882bde9a2b3b4a76eca58b23d22efefb56e1e685a1996c52d" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.086352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6tpht" event={"ID":"565dc4e0-05d9-4e31-8a8a-0865909b2523","Type":"ContainerDied","Data":"3d8b9c33e105587807cc15b16e01cefd863d1ca713451d45bc9256b2ca70150d"} Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.086384 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8b9c33e105587807cc15b16e01cefd863d1ca713451d45bc9256b2ca70150d" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.086504 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6tpht" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.576883 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:43 crc kubenswrapper[4724]: I0226 11:32:43.578798 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-746558bfbf-gbdpm" Feb 26 11:32:44 crc kubenswrapper[4724]: I0226 11:32:44.115806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerStarted","Data":"0b9d1bfe262ed33fc110fb4af6cb076c0db5cc1033fbb1900ee7af1e33b24413"} Feb 26 11:32:44 crc kubenswrapper[4724]: I0226 11:32:44.118113 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58cc4895d6-7zzgw" event={"ID":"60dc589b-0663-4d44-a1aa-c57772731f5b","Type":"ContainerStarted","Data":"25659f1b7bb1acf865986baa61edc355ac982f1fa21efed5d1c80185ccf0c083"} Feb 26 11:32:44 crc kubenswrapper[4724]: I0226 11:32:44.122080 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78c4954f9c-cxzbb" podUID="b1c44043-5a26-44d6-bcf3-9f723e9e3f06" containerName="heat-api" containerID="cri-o://c753f580483db5959bdfd5aee618750fac49d88337f44474e57ca97e16e48578" gracePeriod=60 Feb 26 11:32:44 crc kubenswrapper[4724]: I0226 11:32:44.122813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c4954f9c-cxzbb" event={"ID":"b1c44043-5a26-44d6-bcf3-9f723e9e3f06","Type":"ContainerStarted","Data":"c753f580483db5959bdfd5aee618750fac49d88337f44474e57ca97e16e48578"} Feb 26 11:32:44 crc kubenswrapper[4724]: I0226 11:32:44.122849 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:44 crc kubenswrapper[4724]: I0226 11:32:44.223007 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78c4954f9c-cxzbb" podStartSLOduration=26.361520731 podStartE2EDuration="33.222989211s" podCreationTimestamp="2026-02-26 11:32:11 +0000 UTC" firstStartedPulling="2026-02-26 11:32:35.847672731 +0000 UTC m=+1622.503411846" lastFinishedPulling="2026-02-26 11:32:42.709141201 +0000 UTC m=+1629.364880326" observedRunningTime="2026-02-26 11:32:44.204437969 +0000 UTC m=+1630.860177084" watchObservedRunningTime="2026-02-26 11:32:44.222989211 +0000 UTC m=+1630.878728326" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.155459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76f4bfd896-xsknh" event={"ID":"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d","Type":"ContainerStarted","Data":"4c5a1e65621a6e7e1a524f47d7b4621e4d374483a49b673d44c815de19334d9b"} Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.156564 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.164531 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a66d564c-8f30-413c-8026-578de3a429d4","Type":"ContainerStarted","Data":"4ef4932366d4e8efff6a0e2a95f9445ebe7017ba381ca81db5a441eee9e2ff9e"} Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.166258 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.192433 4724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-76f4bfd896-xsknh" podStartSLOduration=13.189925829 podStartE2EDuration="20.192407885s" podCreationTimestamp="2026-02-26 11:32:25 +0000 UTC" firstStartedPulling="2026-02-26 11:32:35.70646124 +0000 UTC m=+1622.362200345" lastFinishedPulling="2026-02-26 11:32:42.708943286 +0000 UTC m=+1629.364682401" observedRunningTime="2026-02-26 11:32:45.180255456 +0000 UTC m=+1631.835994571" watchObservedRunningTime="2026-02-26 11:32:45.192407885 +0000 UTC m=+1631.848147020" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.196818 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerStarted","Data":"fe61b74073a05c91641147c0ed890963faccb0ec50d4414fa05b2aa63da9b913"} Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.215498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-674894f85d-fwnwf" event={"ID":"dfb2bad0-3923-4242-9339-b88cc85fc206","Type":"ContainerStarted","Data":"6784e37e18467ea1923a4c7032b1e4108f3c32884a6c7ee3e42e23b0eb2b492f"} Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.216088 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.219576 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.219546585 podStartE2EDuration="10.219546585s" podCreationTimestamp="2026-02-26 11:32:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:32:45.209949831 +0000 UTC m=+1631.865688946" watchObservedRunningTime="2026-02-26 11:32:45.219546585 +0000 UTC m=+1631.875285710" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.239448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b57bf547-ctb72" event={"ID":"4e104b5a-57be-474d-957f-25a86e9111a1","Type":"ContainerStarted","Data":"091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e"} Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.239913 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.239981 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7b57bf547-ctb72" podUID="4e104b5a-57be-474d-957f-25a86e9111a1" containerName="heat-cfnapi" containerID="cri-o://091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e" gracePeriod=60 Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.272829 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" event={"ID":"e57d7bd1-267a-4643-9581-8554109f7cba","Type":"ContainerStarted","Data":"f1513692264e41e6f16a3c04527a2de4f87e74f21bb259622ead8849d68539cb"} Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.273121 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.273136 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.287110 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-cfnapi-674894f85d-fwnwf" podStartSLOduration=13.970315804 podStartE2EDuration="21.287085033s" podCreationTimestamp="2026-02-26 11:32:24 +0000 UTC" firstStartedPulling="2026-02-26 11:32:35.959801303 +0000 UTC m=+1622.615540418" lastFinishedPulling="2026-02-26 11:32:43.276570532 +0000 UTC m=+1629.932309647" observedRunningTime="2026-02-26 11:32:45.276913214 +0000 UTC m=+1631.932652349" watchObservedRunningTime="2026-02-26 11:32:45.287085033 +0000 UTC m=+1631.942824148" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.323012 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-58cc4895d6-7zzgw" podStartSLOduration=10.110253882 podStartE2EDuration="17.322994386s" podCreationTimestamp="2026-02-26 11:32:28 +0000 UTC" firstStartedPulling="2026-02-26 11:32:35.524964814 +0000 UTC m=+1622.180703929" lastFinishedPulling="2026-02-26 11:32:42.737705308 +0000 UTC m=+1629.393444433" observedRunningTime="2026-02-26 11:32:45.304811834 +0000 UTC m=+1631.960550959" watchObservedRunningTime="2026-02-26 11:32:45.322994386 +0000 UTC m=+1631.978733501" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.377351 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b57bf547-ctb72" podStartSLOduration=27.694316657 podStartE2EDuration="34.377324368s" podCreationTimestamp="2026-02-26 11:32:11 +0000 UTC" firstStartedPulling="2026-02-26 11:32:36.123487306 +0000 UTC m=+1622.779226421" lastFinishedPulling="2026-02-26 11:32:42.806495017 +0000 UTC m=+1629.462234132" observedRunningTime="2026-02-26 11:32:45.342649016 +0000 UTC m=+1631.998388141" watchObservedRunningTime="2026-02-26 11:32:45.377324368 +0000 UTC m=+1632.033063503" Feb 26 11:32:45 crc kubenswrapper[4724]: I0226 11:32:45.413095 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" podStartSLOduration=9.642730582 podStartE2EDuration="17.413072417s" podCreationTimestamp="2026-02-26 11:32:28 +0000 UTC" firstStartedPulling="2026-02-26 11:32:34.938618422 +0000 UTC m=+1621.594357537" lastFinishedPulling="2026-02-26 11:32:42.708960257 +0000 UTC m=+1629.364699372" observedRunningTime="2026-02-26 11:32:45.391705484 +0000 UTC m=+1632.047444609" watchObservedRunningTime="2026-02-26 11:32:45.413072417 +0000 UTC m=+1632.068811542" Feb 26 11:32:46 crc kubenswrapper[4724]: I0226 11:32:46.282777 4724 generic.go:334] "Generic (PLEG): container finished" podID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerID="4c5a1e65621a6e7e1a524f47d7b4621e4d374483a49b673d44c815de19334d9b" exitCode=1 Feb 26 11:32:46 crc kubenswrapper[4724]: I0226 11:32:46.282843 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76f4bfd896-xsknh" event={"ID":"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d","Type":"ContainerDied","Data":"4c5a1e65621a6e7e1a524f47d7b4621e4d374483a49b673d44c815de19334d9b"} Feb 26 11:32:46 crc kubenswrapper[4724]: I0226 11:32:46.283480 4724 scope.go:117] "RemoveContainer" containerID="4c5a1e65621a6e7e1a524f47d7b4621e4d374483a49b673d44c815de19334d9b" Feb 26 11:32:46 crc kubenswrapper[4724]: I0226 11:32:46.285620 4724 generic.go:334] "Generic (PLEG): container finished" podID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerID="6784e37e18467ea1923a4c7032b1e4108f3c32884a6c7ee3e42e23b0eb2b492f" exitCode=1 Feb 26 11:32:46 crc kubenswrapper[4724]: I0226 11:32:46.286113 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-674894f85d-fwnwf" 
event={"ID":"dfb2bad0-3923-4242-9339-b88cc85fc206","Type":"ContainerDied","Data":"6784e37e18467ea1923a4c7032b1e4108f3c32884a6c7ee3e42e23b0eb2b492f"} Feb 26 11:32:46 crc kubenswrapper[4724]: I0226 11:32:46.286476 4724 scope.go:117] "RemoveContainer" containerID="6784e37e18467ea1923a4c7032b1e4108f3c32884a6c7ee3e42e23b0eb2b492f" Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.298938 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-674894f85d-fwnwf" event={"ID":"dfb2bad0-3923-4242-9339-b88cc85fc206","Type":"ContainerStarted","Data":"fcf8d1c0459d5f1cefc067ac2134c960c7cde95abed13391893b2cf7d4df5b2a"} Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.300128 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.302363 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76f4bfd896-xsknh" event={"ID":"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d","Type":"ContainerStarted","Data":"a49579870c9317a498e133792e552649f38f5a584255cb290ecd4eaabbc40c33"} Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.303065 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.412336 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.477531 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-l9gr5"] Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.477812 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="dnsmasq-dns" containerID="cri-o://781945642b288fe8fb053b7f3203a97308fd0f786ae4d69c650653d1e0d37274" gracePeriod=10 Feb 26 11:32:47 crc kubenswrapper[4724]: I0226 11:32:47.730552 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: connect: connection refused" Feb 26 11:32:50 crc kubenswrapper[4724]: I0226 11:32:50.331773 4724 generic.go:334] "Generic (PLEG): container finished" podID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerID="781945642b288fe8fb053b7f3203a97308fd0f786ae4d69c650653d1e0d37274" exitCode=0 Feb 26 11:32:50 crc kubenswrapper[4724]: I0226 11:32:50.331862 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" event={"ID":"485e3e4a-c268-4d2e-8489-fc72d7dd385a","Type":"ContainerDied","Data":"781945642b288fe8fb053b7f3203a97308fd0f786ae4d69c650653d1e0d37274"} Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.361436 4724 generic.go:334] "Generic (PLEG): container finished" podID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerID="a49579870c9317a498e133792e552649f38f5a584255cb290ecd4eaabbc40c33" exitCode=1 Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.361885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76f4bfd896-xsknh" event={"ID":"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d","Type":"ContainerDied","Data":"a49579870c9317a498e133792e552649f38f5a584255cb290ecd4eaabbc40c33"} Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.361929 4724 scope.go:117] 
"RemoveContainer" containerID="4c5a1e65621a6e7e1a524f47d7b4621e4d374483a49b673d44c815de19334d9b" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.363486 4724 scope.go:117] "RemoveContainer" containerID="a49579870c9317a498e133792e552649f38f5a584255cb290ecd4eaabbc40c33" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.364089 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-76f4bfd896-xsknh_openstack(506fa3b2-fb8c-4481-9dda-e1af6c9ff27d)\"" pod="openstack/heat-api-76f4bfd896-xsknh" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.366732 4724 generic.go:334] "Generic (PLEG): container finished" podID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerID="fcf8d1c0459d5f1cefc067ac2134c960c7cde95abed13391893b2cf7d4df5b2a" exitCode=1 Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.366777 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-674894f85d-fwnwf" event={"ID":"dfb2bad0-3923-4242-9339-b88cc85fc206","Type":"ContainerDied","Data":"fcf8d1c0459d5f1cefc067ac2134c960c7cde95abed13391893b2cf7d4df5b2a"} Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.367683 4724 scope.go:117] "RemoveContainer" containerID="fcf8d1c0459d5f1cefc067ac2134c960c7cde95abed13391893b2cf7d4df5b2a" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.368506 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-674894f85d-fwnwf_openstack(dfb2bad0-3923-4242-9339-b88cc85fc206)\"" pod="openstack/heat-cfnapi-674894f85d-fwnwf" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.699969 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vzfph"] Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.700793 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6804ceff-36ec-4004-baf8-69e65d998378" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.700914 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6804ceff-36ec-4004-baf8-69e65d998378" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.701010 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddd79bf-b594-45ff-95e6-69bb0bc58dca" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.701113 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddd79bf-b594-45ff-95e6-69bb0bc58dca" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.701238 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1589cfa-091d-47f6-bd8f-0db0f5756cce" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.701317 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1589cfa-091d-47f6-bd8f-0db0f5756cce" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.701438 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a8dfd5-eeec-402d-a5fb-a087eae65b81" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: 
I0226 11:32:51.701521 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a8dfd5-eeec-402d-a5fb-a087eae65b81" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.701601 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6014f5be-ec67-4cfd-89f7-74db5e786dc0" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.701694 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6014f5be-ec67-4cfd-89f7-74db5e786dc0" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: E0226 11:32:51.701787 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565dc4e0-05d9-4e31-8a8a-0865909b2523" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.701865 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="565dc4e0-05d9-4e31-8a8a-0865909b2523" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.702289 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a8dfd5-eeec-402d-a5fb-a087eae65b81" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.702414 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="565dc4e0-05d9-4e31-8a8a-0865909b2523" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.702509 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1589cfa-091d-47f6-bd8f-0db0f5756cce" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.702618 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6804ceff-36ec-4004-baf8-69e65d998378" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.702722 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6014f5be-ec67-4cfd-89f7-74db5e786dc0" containerName="mariadb-database-create" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.702820 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddd79bf-b594-45ff-95e6-69bb0bc58dca" containerName="mariadb-account-create-update" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.703886 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.714851 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.715534 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.715819 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b6gk7" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.717516 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vzfph"] Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.847656 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jsn2\" (UniqueName: \"kubernetes.io/projected/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-kube-api-access-9jsn2\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.848170 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.848216 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-scripts\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.848249 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-config-data\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.954288 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jsn2\" (UniqueName: \"kubernetes.io/projected/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-kube-api-access-9jsn2\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.954392 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.954424 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-scripts\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: 
\"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.954459 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-config-data\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.969568 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.969839 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-config-data\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:51 crc kubenswrapper[4724]: I0226 11:32:51.985097 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-scripts\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:51.987832 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jsn2\" (UniqueName: \"kubernetes.io/projected/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-kube-api-access-9jsn2\") pod \"nova-cell0-conductor-db-sync-vzfph\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.036933 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.100053 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.216270 4724 scope.go:117] "RemoveContainer" containerID="6784e37e18467ea1923a4c7032b1e4108f3c32884a6c7ee3e42e23b0eb2b492f" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.269899 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8q8c\" (UniqueName: \"kubernetes.io/projected/485e3e4a-c268-4d2e-8489-fc72d7dd385a-kube-api-access-q8q8c\") pod \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.270191 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-svc\") pod \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.270241 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-config\") pod \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.270286 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-sb\") pod \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.270312 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-swift-storage-0\") pod \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.270417 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-nb\") pod \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\" (UID: \"485e3e4a-c268-4d2e-8489-fc72d7dd385a\") " Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.294446 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485e3e4a-c268-4d2e-8489-fc72d7dd385a-kube-api-access-q8q8c" (OuterVolumeSpecName: "kube-api-access-q8q8c") pod "485e3e4a-c268-4d2e-8489-fc72d7dd385a" (UID: "485e3e4a-c268-4d2e-8489-fc72d7dd385a"). InnerVolumeSpecName "kube-api-access-q8q8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.347891 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "485e3e4a-c268-4d2e-8489-fc72d7dd385a" (UID: "485e3e4a-c268-4d2e-8489-fc72d7dd385a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.352442 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-config" (OuterVolumeSpecName: "config") pod "485e3e4a-c268-4d2e-8489-fc72d7dd385a" (UID: "485e3e4a-c268-4d2e-8489-fc72d7dd385a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.383564 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8q8c\" (UniqueName: \"kubernetes.io/projected/485e3e4a-c268-4d2e-8489-fc72d7dd385a-kube-api-access-q8q8c\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.383597 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.383607 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.411518 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "485e3e4a-c268-4d2e-8489-fc72d7dd385a" (UID: "485e3e4a-c268-4d2e-8489-fc72d7dd385a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.415473 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" event={"ID":"485e3e4a-c268-4d2e-8489-fc72d7dd385a","Type":"ContainerDied","Data":"78f8a3cdd0259bc5a2d34fe3bce8a2200cb692d7bb8caaa3ed45a9300a64e014"} Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.415577 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-l9gr5" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.425616 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "485e3e4a-c268-4d2e-8489-fc72d7dd385a" (UID: "485e3e4a-c268-4d2e-8489-fc72d7dd385a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.436791 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "485e3e4a-c268-4d2e-8489-fc72d7dd385a" (UID: "485e3e4a-c268-4d2e-8489-fc72d7dd385a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.447705 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.485664 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.485703 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.485745 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/485e3e4a-c268-4d2e-8489-fc72d7dd385a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.684725 4724 scope.go:117] "RemoveContainer" containerID="781945642b288fe8fb053b7f3203a97308fd0f786ae4d69c650653d1e0d37274" Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.762000 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-l9gr5"] Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.773891 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-l9gr5"] Feb 26 11:32:52 crc kubenswrapper[4724]: I0226 11:32:52.810411 4724 scope.go:117] "RemoveContainer" containerID="1df0a15f4d4b0af709163821a05cecf36fb1fd8388535e3efe699ece84ffcb02" Feb 26 11:32:53 crc kubenswrapper[4724]: I0226 11:32:53.388863 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vzfph"] Feb 26 11:32:53 crc kubenswrapper[4724]: I0226 11:32:53.460055 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerStarted","Data":"b87cd3d6cba4bb3a31380c07febfd6588e8a780f697c2746e229f5148837e7d0"} Feb 26 11:32:53 crc kubenswrapper[4724]: I0226 11:32:53.460122 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:32:53 crc kubenswrapper[4724]: I0226 11:32:53.464577 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vzfph" event={"ID":"41d636b5-9092-4373-a1f9-8c79f5b9ddaa","Type":"ContainerStarted","Data":"92d59f36daa876318c86d1bd3f8fc6f19dc1e4bd443950d5bfe07ef57907a08f"} Feb 26 11:32:53 crc kubenswrapper[4724]: I0226 11:32:53.503021 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.400204007 podStartE2EDuration="18.5029981s" podCreationTimestamp="2026-02-26 11:32:35 +0000 UTC" firstStartedPulling="2026-02-26 11:32:37.707845739 +0000 UTC m=+1624.363584854" lastFinishedPulling="2026-02-26 11:32:52.810639832 +0000 UTC m=+1639.466378947" observedRunningTime="2026-02-26 11:32:53.490081911 +0000 UTC m=+1640.145821026" watchObservedRunningTime="2026-02-26 11:32:53.5029981 +0000 UTC m=+1640.158737215" Feb 26 11:32:53 crc kubenswrapper[4724]: I0226 11:32:53.987429 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" path="/var/lib/kubelet/pods/485e3e4a-c268-4d2e-8489-fc72d7dd385a/volumes" Feb 26 11:32:54 crc 
kubenswrapper[4724]: I0226 11:32:54.468123 4724 scope.go:117] "RemoveContainer" containerID="b97bda9188d95594555c0b39fe8cafd2b472fc84658d15791afe2455e81531aa" Feb 26 11:32:54 crc kubenswrapper[4724]: I0226 11:32:54.508248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed","Type":"ContainerStarted","Data":"347ca2bdc9b9f65218e72041be666e33684a5e0f2b4a6013051175b62299c559"} Feb 26 11:32:54 crc kubenswrapper[4724]: I0226 11:32:54.545672 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.965905088 podStartE2EDuration="56.545653087s" podCreationTimestamp="2026-02-26 11:31:58 +0000 UTC" firstStartedPulling="2026-02-26 11:31:59.245588596 +0000 UTC m=+1585.901327711" lastFinishedPulling="2026-02-26 11:32:52.825336595 +0000 UTC m=+1639.481075710" observedRunningTime="2026-02-26 11:32:54.526838158 +0000 UTC m=+1641.182577283" watchObservedRunningTime="2026-02-26 11:32:54.545653087 +0000 UTC m=+1641.201392202" Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.256604 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.257678 4724 scope.go:117] "RemoveContainer" containerID="fcf8d1c0459d5f1cefc067ac2134c960c7cde95abed13391893b2cf7d4df5b2a" Feb 26 11:32:55 crc kubenswrapper[4724]: E0226 11:32:55.258015 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-674894f85d-fwnwf_openstack(dfb2bad0-3923-4242-9339-b88cc85fc206)\"" pod="openstack/heat-cfnapi-674894f85d-fwnwf" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.306667 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-78fbbcf444-k8n4t" Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.397645 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6746496466-bz5b7"] Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.397837 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6746496466-bz5b7" podUID="25ee5971-289d-4cf3-852d-e6473c97582f" containerName="heat-engine" containerID="cri-o://bb4f16c6d5fa10f2ad577b6dfe9e380b9656ebd727c25424943ba552aa274849" gracePeriod=60 Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.588458 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:32:55 crc kubenswrapper[4724]: I0226 11:32:55.589749 4724 scope.go:117] "RemoveContainer" containerID="a49579870c9317a498e133792e552649f38f5a584255cb290ecd4eaabbc40c33" Feb 26 11:32:55 crc kubenswrapper[4724]: E0226 11:32:55.590015 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-76f4bfd896-xsknh_openstack(506fa3b2-fb8c-4481-9dda-e1af6c9ff27d)\"" pod="openstack/heat-api-76f4bfd896-xsknh" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" Feb 26 11:32:56 crc kubenswrapper[4724]: I0226 11:32:56.304447 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a66d564c-8f30-413c-8026-578de3a429d4" 
containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:56 crc kubenswrapper[4724]: I0226 11:32:56.304898 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="a66d564c-8f30-413c-8026-578de3a429d4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.415536 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.607216 4724 generic.go:334] "Generic (PLEG): container finished" podID="25ee5971-289d-4cf3-852d-e6473c97582f" containerID="bb4f16c6d5fa10f2ad577b6dfe9e380b9656ebd727c25424943ba552aa274849" exitCode=0 Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.607507 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6746496466-bz5b7" event={"ID":"25ee5971-289d-4cf3-852d-e6473c97582f","Type":"ContainerDied","Data":"bb4f16c6d5fa10f2ad577b6dfe9e380b9656ebd727c25424943ba552aa274849"} Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.767990 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.905262 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data\") pod \"25ee5971-289d-4cf3-852d-e6473c97582f\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.905442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data-custom\") pod \"25ee5971-289d-4cf3-852d-e6473c97582f\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.905558 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-combined-ca-bundle\") pod \"25ee5971-289d-4cf3-852d-e6473c97582f\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.905638 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9zvp\" (UniqueName: \"kubernetes.io/projected/25ee5971-289d-4cf3-852d-e6473c97582f-kube-api-access-v9zvp\") pod \"25ee5971-289d-4cf3-852d-e6473c97582f\" (UID: \"25ee5971-289d-4cf3-852d-e6473c97582f\") " Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.914565 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ee5971-289d-4cf3-852d-e6473c97582f-kube-api-access-v9zvp" (OuterVolumeSpecName: "kube-api-access-v9zvp") pod "25ee5971-289d-4cf3-852d-e6473c97582f" (UID: "25ee5971-289d-4cf3-852d-e6473c97582f"). InnerVolumeSpecName "kube-api-access-v9zvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.926597 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25ee5971-289d-4cf3-852d-e6473c97582f" (UID: "25ee5971-289d-4cf3-852d-e6473c97582f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.954310 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ee5971-289d-4cf3-852d-e6473c97582f" (UID: "25ee5971-289d-4cf3-852d-e6473c97582f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:58 crc kubenswrapper[4724]: I0226 11:32:58.995355 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data" (OuterVolumeSpecName: "config-data") pod "25ee5971-289d-4cf3-852d-e6473c97582f" (UID: "25ee5971-289d-4cf3-852d-e6473c97582f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.008649 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.008686 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.008697 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9zvp\" (UniqueName: \"kubernetes.io/projected/25ee5971-289d-4cf3-852d-e6473c97582f-kube-api-access-v9zvp\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.008708 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ee5971-289d-4cf3-852d-e6473c97582f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.042750 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5bbc75466c-6dmf6" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.117393 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-674894f85d-fwnwf"] Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.230787 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.652812 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6746496466-bz5b7" event={"ID":"25ee5971-289d-4cf3-852d-e6473c97582f","Type":"ContainerDied","Data":"fc5e9cf4e830a5324da1fd66effddc45471af1e7de889b5c5863f7058a29a2af"} Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.653147 4724 scope.go:117] "RemoveContainer" containerID="bb4f16c6d5fa10f2ad577b6dfe9e380b9656ebd727c25424943ba552aa274849" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.653245 4724 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6746496466-bz5b7" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.667748 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-674894f85d-fwnwf" event={"ID":"dfb2bad0-3923-4242-9339-b88cc85fc206","Type":"ContainerDied","Data":"88b39ac7022b373b0eaccbf2de382fda65c1b57e41a9155d164cbfda9fffdad3"} Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.667785 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b39ac7022b373b0eaccbf2de382fda65c1b57e41a9155d164cbfda9fffdad3" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.750923 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.773896 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6746496466-bz5b7"] Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.799151 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6746496466-bz5b7"] Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.947795 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-combined-ca-bundle\") pod \"dfb2bad0-3923-4242-9339-b88cc85fc206\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.948051 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data-custom\") pod \"dfb2bad0-3923-4242-9339-b88cc85fc206\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.948079 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm5gh\" (UniqueName: \"kubernetes.io/projected/dfb2bad0-3923-4242-9339-b88cc85fc206-kube-api-access-nm5gh\") pod \"dfb2bad0-3923-4242-9339-b88cc85fc206\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.948124 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data\") pod \"dfb2bad0-3923-4242-9339-b88cc85fc206\" (UID: \"dfb2bad0-3923-4242-9339-b88cc85fc206\") " Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.956391 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb2bad0-3923-4242-9339-b88cc85fc206-kube-api-access-nm5gh" (OuterVolumeSpecName: "kube-api-access-nm5gh") pod "dfb2bad0-3923-4242-9339-b88cc85fc206" (UID: "dfb2bad0-3923-4242-9339-b88cc85fc206"). InnerVolumeSpecName "kube-api-access-nm5gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:32:59 crc kubenswrapper[4724]: I0226 11:32:59.991292 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dfb2bad0-3923-4242-9339-b88cc85fc206" (UID: "dfb2bad0-3923-4242-9339-b88cc85fc206"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.024506 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfb2bad0-3923-4242-9339-b88cc85fc206" (UID: "dfb2bad0-3923-4242-9339-b88cc85fc206"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.044398 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ee5971-289d-4cf3-852d-e6473c97582f" path="/var/lib/kubelet/pods/25ee5971-289d-4cf3-852d-e6473c97582f/volumes" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.051240 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.051269 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm5gh\" (UniqueName: \"kubernetes.io/projected/dfb2bad0-3923-4242-9339-b88cc85fc206-kube-api-access-nm5gh\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.051281 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.144410 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data" (OuterVolumeSpecName: "config-data") pod "dfb2bad0-3923-4242-9339-b88cc85fc206" (UID: "dfb2bad0-3923-4242-9339-b88cc85fc206"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.159808 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb2bad0-3923-4242-9339-b88cc85fc206-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.682218 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-674894f85d-fwnwf" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.691972 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-58cc4895d6-7zzgw" Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.743678 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-674894f85d-fwnwf"] Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.764305 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-674894f85d-fwnwf"] Feb 26 11:33:00 crc kubenswrapper[4724]: I0226 11:33:00.787867 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-76f4bfd896-xsknh"] Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.317305 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="a66d564c-8f30-413c-8026-578de3a429d4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.327036 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a66d564c-8f30-413c-8026-578de3a429d4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.364579 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.520864 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data-custom\") pod \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.520927 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr6kd\" (UniqueName: \"kubernetes.io/projected/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-kube-api-access-cr6kd\") pod \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.520963 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-combined-ca-bundle\") pod \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.521004 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data\") pod \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\" (UID: \"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d\") " Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.530464 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-kube-api-access-cr6kd" (OuterVolumeSpecName: "kube-api-access-cr6kd") pod "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" (UID: "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d"). InnerVolumeSpecName "kube-api-access-cr6kd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.530629 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" (UID: "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.588649 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" (UID: "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.626893 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.626928 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr6kd\" (UniqueName: \"kubernetes.io/projected/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-kube-api-access-cr6kd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.626939 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.689335 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data" (OuterVolumeSpecName: "config-data") pod "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" (UID: "506fa3b2-fb8c-4481-9dda-e1af6c9ff27d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.728416 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.749891 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76f4bfd896-xsknh" event={"ID":"506fa3b2-fb8c-4481-9dda-e1af6c9ff27d","Type":"ContainerDied","Data":"db2e35d45d784a1d6790f195ef39cf3d3423584c85a2816ef017d2cc0d74eb7b"} Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.749947 4724 scope.go:117] "RemoveContainer" containerID="a49579870c9317a498e133792e552649f38f5a584255cb290ecd4eaabbc40c33" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.750072 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-76f4bfd896-xsknh" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.791894 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-76f4bfd896-xsknh"] Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.803449 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-76f4bfd896-xsknh"] Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.993771 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" path="/var/lib/kubelet/pods/506fa3b2-fb8c-4481-9dda-e1af6c9ff27d/volumes" Feb 26 11:33:01 crc kubenswrapper[4724]: I0226 11:33:01.994326 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" path="/var/lib/kubelet/pods/dfb2bad0-3923-4242-9339-b88cc85fc206/volumes" Feb 26 11:33:03 crc kubenswrapper[4724]: I0226 11:33:03.836330 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-b86hc" podUID="d848b417-9306-4564-b059-0dc84bd7ec1a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:33:05 crc kubenswrapper[4724]: I0226 11:33:05.914245 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.347516 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.347819 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-central-agent" containerID="cri-o://393a2a65e400b32a8071e3114589ea56262d7c7ef7746528ee28bb2886942ff2" gracePeriod=30 Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.347956 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="proxy-httpd" containerID="cri-o://b87cd3d6cba4bb3a31380c07febfd6588e8a780f697c2746e229f5148837e7d0" gracePeriod=30 Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.348007 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="sg-core" containerID="cri-o://fe61b74073a05c91641147c0ed890963faccb0ec50d4414fa05b2aa63da9b913" gracePeriod=30 Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.348038 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-notification-agent" containerID="cri-o://0b9d1bfe262ed33fc110fb4af6cb076c0db5cc1033fbb1900ee7af1e33b24413" gracePeriod=30 Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.459417 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.196:3000/\": read tcp 10.217.0.2:51894->10.217.0.196:3000: read: connection reset by peer" Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.459798 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="proxy-httpd" 
probeResult="failure" output="Get \"http://10.217.0.196:3000/\": dial tcp 10.217.0.196:3000: connect: connection refused" Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.859816 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerID="b87cd3d6cba4bb3a31380c07febfd6588e8a780f697c2746e229f5148837e7d0" exitCode=0 Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.859860 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerID="fe61b74073a05c91641147c0ed890963faccb0ec50d4414fa05b2aa63da9b913" exitCode=2 Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.859885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerDied","Data":"b87cd3d6cba4bb3a31380c07febfd6588e8a780f697c2746e229f5148837e7d0"} Feb 26 11:33:06 crc kubenswrapper[4724]: I0226 11:33:06.859919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerDied","Data":"fe61b74073a05c91641147c0ed890963faccb0ec50d4414fa05b2aa63da9b913"} Feb 26 11:33:07 crc kubenswrapper[4724]: I0226 11:33:07.923052 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerID="393a2a65e400b32a8071e3114589ea56262d7c7ef7746528ee28bb2886942ff2" exitCode=0 Feb 26 11:33:07 crc kubenswrapper[4724]: I0226 11:33:07.923160 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerDied","Data":"393a2a65e400b32a8071e3114589ea56262d7c7ef7746528ee28bb2886942ff2"} Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.948214 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerID="0b9d1bfe262ed33fc110fb4af6cb076c0db5cc1033fbb1900ee7af1e33b24413" exitCode=0 Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.948304 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerDied","Data":"0b9d1bfe262ed33fc110fb4af6cb076c0db5cc1033fbb1900ee7af1e33b24413"} Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.952264 4724 generic.go:334] "Generic (PLEG): container finished" podID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerID="7099fe5c31115c0b722be7a13c0a9feb5c472f77246d6698e652b193791a6781" exitCode=137 Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.952350 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerDied","Data":"7099fe5c31115c0b722be7a13c0a9feb5c472f77246d6698e652b193791a6781"} Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.952422 4724 scope.go:117] "RemoveContainer" containerID="cf119be6b682f8400345d567636d81c24d1362c00c424d4a82811c66edd703a0" Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.958864 4724 generic.go:334] "Generic (PLEG): container finished" podID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerID="ac398868e5679a7aa01f6bdf65598f3111cd3c8e4085be5a0b71236c8e2306eb" exitCode=137 Feb 26 11:33:08 crc kubenswrapper[4724]: I0226 11:33:08.958903 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" 
event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerDied","Data":"ac398868e5679a7aa01f6bdf65598f3111cd3c8e4085be5a0b71236c8e2306eb"} Feb 26 11:33:14 crc kubenswrapper[4724]: E0226 11:33:14.768943 4724 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.612s" Feb 26 11:33:21 crc kubenswrapper[4724]: E0226 11:33:21.956615 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 26 11:33:21 crc kubenswrapper[4724]: E0226 11:33:21.957303 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jsn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-vzfph_openstack(41d636b5-9092-4373-a1f9-8c79f5b9ddaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 11:33:21 crc kubenswrapper[4724]: E0226 11:33:21.959435 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-vzfph" podUID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.420713 4724 scope.go:117] "RemoveContainer" containerID="3e5edab1e2c718511750fd9327e7561944102843f5433d3bb1fb9259ca86717b" Feb 26 11:33:22 crc 
kubenswrapper[4724]: I0226 11:33:22.494173 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.769625 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-sg-core-conf-yaml\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.769759 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-log-httpd\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.771569 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-config-data\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.771649 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-scripts\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.771712 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-run-httpd\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.771766 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s986\" (UniqueName: \"kubernetes.io/projected/a2972a18-f09b-4535-ae61-0e6b9498d094-kube-api-access-8s986\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.771837 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-combined-ca-bundle\") pod \"a2972a18-f09b-4535-ae61-0e6b9498d094\" (UID: \"a2972a18-f09b-4535-ae61-0e6b9498d094\") " Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.774016 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.774641 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.774977 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.775005 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2972a18-f09b-4535-ae61-0e6b9498d094-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.779134 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2972a18-f09b-4535-ae61-0e6b9498d094-kube-api-access-8s986" (OuterVolumeSpecName: "kube-api-access-8s986") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "kube-api-access-8s986". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.787825 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-scripts" (OuterVolumeSpecName: "scripts") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.819306 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.880555 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.880611 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s986\" (UniqueName: \"kubernetes.io/projected/a2972a18-f09b-4535-ae61-0e6b9498d094-kube-api-access-8s986\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.880629 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.898333 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-config-data" (OuterVolumeSpecName: "config-data") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.930565 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2972a18-f09b-4535-ae61-0e6b9498d094" (UID: "a2972a18-f09b-4535-ae61-0e6b9498d094"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.952198 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerStarted","Data":"95449fe5b1852e70ef5d4115673dda1bb1e3c75529c1bdd990fe212a5d65423d"} Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.956494 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977849d4-8s5ds" event={"ID":"e4c4b3ae-030b-4e33-9779-2ffa39196a76","Type":"ContainerStarted","Data":"c1e56bc3b66b69d49f4a4b48a457e32cf04613d3551e4a503c181676b9bbfc82"} Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.963402 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2972a18-f09b-4535-ae61-0e6b9498d094","Type":"ContainerDied","Data":"db69f43307b8931addd5cdea8a6bddd6c238d5db0c357e17cb7fdfa91781aff6"} Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.963500 4724 scope.go:117] "RemoveContainer" containerID="b87cd3d6cba4bb3a31380c07febfd6588e8a780f697c2746e229f5148837e7d0" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.963425 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:22 crc kubenswrapper[4724]: E0226 11:33:22.969387 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-vzfph" podUID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.990364 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:22 crc kubenswrapper[4724]: I0226 11:33:22.990403 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2972a18-f09b-4535-ae61-0e6b9498d094-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.060973 4724 scope.go:117] "RemoveContainer" containerID="fe61b74073a05c91641147c0ed890963faccb0ec50d4414fa05b2aa63da9b913" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.092056 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.095947 4724 scope.go:117] "RemoveContainer" containerID="0b9d1bfe262ed33fc110fb4af6cb076c0db5cc1033fbb1900ee7af1e33b24413" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.106109 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127003 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127401 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ee5971-289d-4cf3-852d-e6473c97582f" containerName="heat-engine" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127415 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ee5971-289d-4cf3-852d-e6473c97582f" containerName="heat-engine" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127427 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerName="heat-cfnapi" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127433 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerName="heat-cfnapi" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127443 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="init" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127449 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="init" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127462 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="dnsmasq-dns" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127467 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="dnsmasq-dns" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127475 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerName="heat-cfnapi" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127482 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerName="heat-cfnapi" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127490 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-notification-agent" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127496 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-notification-agent" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127506 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerName="heat-api" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127512 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerName="heat-api" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127522 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="proxy-httpd" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127528 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="proxy-httpd" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127551 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-central-agent" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127557 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-central-agent" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127568 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="sg-core" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127573 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="sg-core" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127736 4724 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerName="heat-cfnapi" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127748 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb2bad0-3923-4242-9339-b88cc85fc206" containerName="heat-cfnapi" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127757 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerName="heat-api" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127764 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-notification-agent" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127773 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ee5971-289d-4cf3-852d-e6473c97582f" containerName="heat-engine" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127784 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="ceilometer-central-agent" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127790 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="sg-core" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127799 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e3e4a-c268-4d2e-8489-fc72d7dd385a" containerName="dnsmasq-dns" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127812 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" containerName="proxy-httpd" Feb 26 11:33:23 crc kubenswrapper[4724]: E0226 11:33:23.127990 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerName="heat-api" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.127997 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerName="heat-api" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.128207 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="506fa3b2-fb8c-4481-9dda-e1af6c9ff27d" containerName="heat-api" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.129558 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.136009 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.136299 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.136582 4724 scope.go:117] "RemoveContainer" containerID="393a2a65e400b32a8071e3114589ea56262d7c7ef7746528ee28bb2886942ff2" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.226128 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.295805 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-log-httpd\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.295894 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.295926 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-run-httpd\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.295990 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-scripts\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.296012 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-config-data\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.296033 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2ps\" (UniqueName: \"kubernetes.io/projected/db35a650-999a-45fc-8bb6-85b86ac7feba-kube-api-access-8b2ps\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.296093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.397671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-log-httpd\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.397963 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.398088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-run-httpd\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.398192 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-log-httpd\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.398328 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-scripts\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.398551 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-run-httpd\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.398871 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-config-data\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.399437 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2ps\" (UniqueName: \"kubernetes.io/projected/db35a650-999a-45fc-8bb6-85b86ac7feba-kube-api-access-8b2ps\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.399602 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.404765 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.404995 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.405615 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-config-data\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.406003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-scripts\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.486225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2ps\" (UniqueName: \"kubernetes.io/projected/db35a650-999a-45fc-8bb6-85b86ac7feba-kube-api-access-8b2ps\") pod \"ceilometer-0\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " pod="openstack/ceilometer-0" Feb 26 11:33:23 crc kubenswrapper[4724]: I0226 11:33:23.748064 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:33:24 crc kubenswrapper[4724]: I0226 11:33:24.007251 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2972a18-f09b-4535-ae61-0e6b9498d094" path="/var/lib/kubelet/pods/a2972a18-f09b-4535-ae61-0e6b9498d094/volumes" Feb 26 11:33:24 crc kubenswrapper[4724]: I0226 11:33:24.419891 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:24 crc kubenswrapper[4724]: I0226 11:33:24.524364 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:33:25 crc kubenswrapper[4724]: I0226 11:33:25.070600 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerStarted","Data":"8e40d91a23886a1ba294991fce2154fbca59c7f94b607c93804f64d27565828c"} Feb 26 11:33:26 crc kubenswrapper[4724]: I0226 11:33:26.080673 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerStarted","Data":"6aec32bf39088fc922c9b532d6aafd73cbdb270f60f384f063c2b0c408015dcf"} Feb 26 11:33:27 crc kubenswrapper[4724]: I0226 11:33:27.093934 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerStarted","Data":"25f77826f7ccccb3eab037451fce2fd54ffb84f853a1eee34ea5e492cb3b5fc2"} Feb 26 11:33:28 crc kubenswrapper[4724]: I0226 11:33:28.117663 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:33:28 crc kubenswrapper[4724]: I0226 11:33:28.118056 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:33:28 crc kubenswrapper[4724]: I0226 11:33:28.139983 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerStarted","Data":"b0362de03f2cd567687b118d0f828f83e56ce784e121a4bc79ae72581617668a"} Feb 26 11:33:28 crc kubenswrapper[4724]: I0226 11:33:28.366855 4724 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:33:28 crc kubenswrapper[4724]: I0226 11:33:28.367961 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.162231 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerStarted","Data":"59ff5d07080f5052f23a5fd6e3d21fab7f6babe570448b56edac0b7937e74839"} Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.162986 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-central-agent" containerID="cri-o://6aec32bf39088fc922c9b532d6aafd73cbdb270f60f384f063c2b0c408015dcf" gracePeriod=30 Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.163084 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.163571 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="proxy-httpd" containerID="cri-o://59ff5d07080f5052f23a5fd6e3d21fab7f6babe570448b56edac0b7937e74839" gracePeriod=30 Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.163627 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="sg-core" containerID="cri-o://b0362de03f2cd567687b118d0f828f83e56ce784e121a4bc79ae72581617668a" gracePeriod=30 Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.163664 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-notification-agent" containerID="cri-o://25f77826f7ccccb3eab037451fce2fd54ffb84f853a1eee34ea5e492cb3b5fc2" gracePeriod=30 Feb 26 11:33:30 crc kubenswrapper[4724]: I0226 11:33:30.203258 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.000719608 podStartE2EDuration="7.203233548s" podCreationTimestamp="2026-02-26 11:33:23 +0000 UTC" firstStartedPulling="2026-02-26 11:33:24.535271231 +0000 UTC m=+1671.191010346" lastFinishedPulling="2026-02-26 11:33:29.737785171 +0000 UTC m=+1676.393524286" observedRunningTime="2026-02-26 11:33:30.19071234 +0000 UTC m=+1676.846451455" watchObservedRunningTime="2026-02-26 11:33:30.203233548 +0000 UTC m=+1676.858972683" Feb 26 11:33:31 crc kubenswrapper[4724]: I0226 11:33:31.179039 4724 generic.go:334] "Generic (PLEG): container finished" podID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerID="b0362de03f2cd567687b118d0f828f83e56ce784e121a4bc79ae72581617668a" exitCode=2 Feb 26 11:33:31 crc kubenswrapper[4724]: I0226 11:33:31.179464 4724 generic.go:334] "Generic (PLEG): container finished" podID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerID="25f77826f7ccccb3eab037451fce2fd54ffb84f853a1eee34ea5e492cb3b5fc2" exitCode=0 Feb 26 11:33:31 crc kubenswrapper[4724]: I0226 11:33:31.179296 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerDied","Data":"b0362de03f2cd567687b118d0f828f83e56ce784e121a4bc79ae72581617668a"} Feb 26 11:33:31 crc kubenswrapper[4724]: I0226 11:33:31.179606 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerDied","Data":"25f77826f7ccccb3eab037451fce2fd54ffb84f853a1eee34ea5e492cb3b5fc2"} Feb 26 11:33:38 crc kubenswrapper[4724]: I0226 11:33:38.062352 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:33:38 crc kubenswrapper[4724]: I0226 11:33:38.368119 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:33:41 crc kubenswrapper[4724]: I0226 11:33:41.363084 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vzfph" event={"ID":"41d636b5-9092-4373-a1f9-8c79f5b9ddaa","Type":"ContainerStarted","Data":"524c1bfe8f3d0d91ea2d6f151b6b555d1cb1ea11319c9dd099fb62aa16cc2055"} Feb 26 11:33:41 crc kubenswrapper[4724]: I0226 11:33:41.383036 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vzfph" podStartSLOduration=3.629279182 podStartE2EDuration="50.3830087s" podCreationTimestamp="2026-02-26 11:32:51 +0000 UTC" firstStartedPulling="2026-02-26 11:32:53.435861632 +0000 UTC m=+1640.091600747" lastFinishedPulling="2026-02-26 11:33:40.18959115 +0000 UTC m=+1686.845330265" observedRunningTime="2026-02-26 11:33:41.38141019 +0000 UTC m=+1688.037149325" watchObservedRunningTime="2026-02-26 11:33:41.3830087 +0000 UTC m=+1688.038747815" Feb 26 11:33:42 crc kubenswrapper[4724]: I0226 11:33:42.376896 4724 generic.go:334] "Generic (PLEG): container finished" podID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerID="6aec32bf39088fc922c9b532d6aafd73cbdb270f60f384f063c2b0c408015dcf" exitCode=0 Feb 26 11:33:42 crc kubenswrapper[4724]: I0226 11:33:42.376988 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerDied","Data":"6aec32bf39088fc922c9b532d6aafd73cbdb270f60f384f063c2b0c408015dcf"} Feb 26 11:33:44 crc kubenswrapper[4724]: I0226 11:33:44.398030 4724 generic.go:334] "Generic (PLEG): container finished" podID="b1c44043-5a26-44d6-bcf3-9f723e9e3f06" containerID="c753f580483db5959bdfd5aee618750fac49d88337f44474e57ca97e16e48578" exitCode=137 Feb 26 11:33:44 crc kubenswrapper[4724]: I0226 11:33:44.399791 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c4954f9c-cxzbb" event={"ID":"b1c44043-5a26-44d6-bcf3-9f723e9e3f06","Type":"ContainerDied","Data":"c753f580483db5959bdfd5aee618750fac49d88337f44474e57ca97e16e48578"} Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.168732 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.361878 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-combined-ca-bundle\") pod \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.361939 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s8p9\" (UniqueName: \"kubernetes.io/projected/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-kube-api-access-7s8p9\") pod \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.361966 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data\") pod \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.362164 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data-custom\") pod \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\" (UID: \"b1c44043-5a26-44d6-bcf3-9f723e9e3f06\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.382601 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1c44043-5a26-44d6-bcf3-9f723e9e3f06" (UID: "b1c44043-5a26-44d6-bcf3-9f723e9e3f06"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.382780 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-kube-api-access-7s8p9" (OuterVolumeSpecName: "kube-api-access-7s8p9") pod "b1c44043-5a26-44d6-bcf3-9f723e9e3f06" (UID: "b1c44043-5a26-44d6-bcf3-9f723e9e3f06"). InnerVolumeSpecName "kube-api-access-7s8p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.399327 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1c44043-5a26-44d6-bcf3-9f723e9e3f06" (UID: "b1c44043-5a26-44d6-bcf3-9f723e9e3f06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.451582 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78c4954f9c-cxzbb" event={"ID":"b1c44043-5a26-44d6-bcf3-9f723e9e3f06","Type":"ContainerDied","Data":"74cc554dd6a7009ecaed57e60d0ca0303c6ec2f2e689bbbbbafc5fdd7c7ad462"} Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.452019 4724 scope.go:117] "RemoveContainer" containerID="c753f580483db5959bdfd5aee618750fac49d88337f44474e57ca97e16e48578" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.452250 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78c4954f9c-cxzbb" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.467629 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.467937 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.469036 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s8p9\" (UniqueName: \"kubernetes.io/projected/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-kube-api-access-7s8p9\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.520404 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data" (OuterVolumeSpecName: "config-data") pod "b1c44043-5a26-44d6-bcf3-9f723e9e3f06" (UID: "b1c44043-5a26-44d6-bcf3-9f723e9e3f06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.571587 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c44043-5a26-44d6-bcf3-9f723e9e3f06-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:45 crc kubenswrapper[4724]: E0226 11:33:45.776361 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e104b5a_57be_474d_957f_25a86e9111a1.slice/crio-conmon-091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.828017 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78c4954f9c-cxzbb"] Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.838029 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78c4954f9c-cxzbb"] Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.867984 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.986488 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pczvp\" (UniqueName: \"kubernetes.io/projected/4e104b5a-57be-474d-957f-25a86e9111a1-kube-api-access-pczvp\") pod \"4e104b5a-57be-474d-957f-25a86e9111a1\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.986628 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-combined-ca-bundle\") pod \"4e104b5a-57be-474d-957f-25a86e9111a1\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.986746 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data-custom\") pod \"4e104b5a-57be-474d-957f-25a86e9111a1\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.986910 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data\") pod \"4e104b5a-57be-474d-957f-25a86e9111a1\" (UID: \"4e104b5a-57be-474d-957f-25a86e9111a1\") " Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.992323 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e104b5a-57be-474d-957f-25a86e9111a1-kube-api-access-pczvp" (OuterVolumeSpecName: "kube-api-access-pczvp") pod "4e104b5a-57be-474d-957f-25a86e9111a1" (UID: "4e104b5a-57be-474d-957f-25a86e9111a1"). InnerVolumeSpecName "kube-api-access-pczvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.995326 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4e104b5a-57be-474d-957f-25a86e9111a1" (UID: "4e104b5a-57be-474d-957f-25a86e9111a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:45 crc kubenswrapper[4724]: I0226 11:33:45.995519 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c44043-5a26-44d6-bcf3-9f723e9e3f06" path="/var/lib/kubelet/pods/b1c44043-5a26-44d6-bcf3-9f723e9e3f06/volumes" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.018829 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e104b5a-57be-474d-957f-25a86e9111a1" (UID: "4e104b5a-57be-474d-957f-25a86e9111a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.041333 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data" (OuterVolumeSpecName: "config-data") pod "4e104b5a-57be-474d-957f-25a86e9111a1" (UID: "4e104b5a-57be-474d-957f-25a86e9111a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.089012 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pczvp\" (UniqueName: \"kubernetes.io/projected/4e104b5a-57be-474d-957f-25a86e9111a1-kube-api-access-pczvp\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.089065 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.089080 4724 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.089092 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e104b5a-57be-474d-957f-25a86e9111a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.462960 4724 generic.go:334] "Generic (PLEG): container finished" podID="4e104b5a-57be-474d-957f-25a86e9111a1" containerID="091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e" exitCode=137 Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.462998 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b57bf547-ctb72" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.463005 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b57bf547-ctb72" event={"ID":"4e104b5a-57be-474d-957f-25a86e9111a1","Type":"ContainerDied","Data":"091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e"} Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.463032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b57bf547-ctb72" event={"ID":"4e104b5a-57be-474d-957f-25a86e9111a1","Type":"ContainerDied","Data":"5c40a5488a7a3ad30c18abfec8024bd145f756b5137493b583895e3e22c811d1"} Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.463048 4724 scope.go:117] "RemoveContainer" containerID="091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.559204 4724 scope.go:117] "RemoveContainer" containerID="091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e" Feb 26 11:33:46 crc kubenswrapper[4724]: E0226 11:33:46.559747 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e\": container with ID starting with 091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e not found: ID does not exist" containerID="091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.559784 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e"} err="failed to get container status \"091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e\": rpc error: code = NotFound desc = could not find container \"091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e\": container with ID starting with 
091a311b1bfca31d143e5d62395c0bfa2e51555aa390c76af0d02a00687fe41e not found: ID does not exist" Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.561022 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b57bf547-ctb72"] Feb 26 11:33:46 crc kubenswrapper[4724]: I0226 11:33:46.571893 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b57bf547-ctb72"] Feb 26 11:33:47 crc kubenswrapper[4724]: I0226 11:33:47.986898 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e104b5a-57be-474d-957f-25a86e9111a1" path="/var/lib/kubelet/pods/4e104b5a-57be-474d-957f-25a86e9111a1/volumes" Feb 26 11:33:48 crc kubenswrapper[4724]: I0226 11:33:48.061980 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:33:48 crc kubenswrapper[4724]: I0226 11:33:48.367447 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977849d4-8s5ds" podUID="e4c4b3ae-030b-4e33-9779-2ffa39196a76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Feb 26 11:33:51 crc kubenswrapper[4724]: I0226 11:33:51.236971 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:51 crc kubenswrapper[4724]: I0226 11:33:51.237727 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-log" containerID="cri-o://16c71b54eec28fe9ff4de59e9710c2c74a9af78ce222076c21ef021233742034" gracePeriod=30 Feb 26 11:33:51 crc kubenswrapper[4724]: I0226 11:33:51.237903 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-httpd" containerID="cri-o://6d76122482c7e3a10bd2190d7e3b9365fe187c527c403f205917bdec3ec81cb8" gracePeriod=30 Feb 26 11:33:51 crc kubenswrapper[4724]: I0226 11:33:51.509699 4724 generic.go:334] "Generic (PLEG): container finished" podID="94be3313-633f-4595-8195-b96e91d607ce" containerID="16c71b54eec28fe9ff4de59e9710c2c74a9af78ce222076c21ef021233742034" exitCode=143 Feb 26 11:33:51 crc kubenswrapper[4724]: I0226 11:33:51.509808 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94be3313-633f-4595-8195-b96e91d607ce","Type":"ContainerDied","Data":"16c71b54eec28fe9ff4de59e9710c2c74a9af78ce222076c21ef021233742034"} Feb 26 11:33:53 crc kubenswrapper[4724]: I0226 11:33:53.756057 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 11:33:55 crc kubenswrapper[4724]: I0226 11:33:55.286214 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9292/healthcheck\": dial tcp 10.217.0.160:9292: connect: 
connection refused" Feb 26 11:33:55 crc kubenswrapper[4724]: I0226 11:33:55.286214 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.160:9292/healthcheck\": dial tcp 10.217.0.160:9292: connect: connection refused" Feb 26 11:33:55 crc kubenswrapper[4724]: I0226 11:33:55.561896 4724 generic.go:334] "Generic (PLEG): container finished" podID="94be3313-633f-4595-8195-b96e91d607ce" containerID="6d76122482c7e3a10bd2190d7e3b9365fe187c527c403f205917bdec3ec81cb8" exitCode=0 Feb 26 11:33:55 crc kubenswrapper[4724]: I0226 11:33:55.561954 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94be3313-633f-4595-8195-b96e91d607ce","Type":"ContainerDied","Data":"6d76122482c7e3a10bd2190d7e3b9365fe187c527c403f205917bdec3ec81cb8"} Feb 26 11:33:56 crc kubenswrapper[4724]: I0226 11:33:56.893485 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018098 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tvdj\" (UniqueName: \"kubernetes.io/projected/94be3313-633f-4595-8195-b96e91d607ce-kube-api-access-2tvdj\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018191 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-httpd-run\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018239 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-logs\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018297 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-config-data\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018429 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-combined-ca-bundle\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018528 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-scripts\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018577 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-public-tls-certs\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: 
\"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018636 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"94be3313-633f-4595-8195-b96e91d607ce\" (UID: \"94be3313-633f-4595-8195-b96e91d607ce\") " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.018731 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.019116 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.020577 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-logs" (OuterVolumeSpecName: "logs") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.027915 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-scripts" (OuterVolumeSpecName: "scripts") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.029679 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94be3313-633f-4595-8195-b96e91d607ce-kube-api-access-2tvdj" (OuterVolumeSpecName: "kube-api-access-2tvdj") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "kube-api-access-2tvdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.052364 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.075384 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.081644 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.084092 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-config-data" (OuterVolumeSpecName: "config-data") pod "94be3313-633f-4595-8195-b96e91d607ce" (UID: "94be3313-633f-4595-8195-b96e91d607ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121357 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tvdj\" (UniqueName: \"kubernetes.io/projected/94be3313-633f-4595-8195-b96e91d607ce-kube-api-access-2tvdj\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121393 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94be3313-633f-4595-8195-b96e91d607ce-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121406 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121418 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121428 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121438 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94be3313-633f-4595-8195-b96e91d607ce-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.121462 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.154520 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.223476 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.579879 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94be3313-633f-4595-8195-b96e91d607ce","Type":"ContainerDied","Data":"3e5a01f7cf1ec7a507b4abf17b06e2721adfdad068151684321aad82917f8944"} Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.579939 4724 scope.go:117] "RemoveContainer" containerID="6d76122482c7e3a10bd2190d7e3b9365fe187c527c403f205917bdec3ec81cb8" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.580166 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.619464 4724 scope.go:117] "RemoveContainer" containerID="16c71b54eec28fe9ff4de59e9710c2c74a9af78ce222076c21ef021233742034" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.625241 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.632117 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.676505 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:57 crc kubenswrapper[4724]: E0226 11:33:57.677265 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-log" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.677373 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-log" Feb 26 11:33:57 crc kubenswrapper[4724]: E0226 11:33:57.677499 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-httpd" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.677579 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-httpd" Feb 26 11:33:57 crc kubenswrapper[4724]: E0226 11:33:57.681802 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e104b5a-57be-474d-957f-25a86e9111a1" containerName="heat-cfnapi" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.682020 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e104b5a-57be-474d-957f-25a86e9111a1" containerName="heat-cfnapi" Feb 26 11:33:57 crc kubenswrapper[4724]: E0226 11:33:57.682154 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c44043-5a26-44d6-bcf3-9f723e9e3f06" containerName="heat-api" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.682262 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c44043-5a26-44d6-bcf3-9f723e9e3f06" containerName="heat-api" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.682727 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-log" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.682823 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c44043-5a26-44d6-bcf3-9f723e9e3f06" containerName="heat-api" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.682922 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="94be3313-633f-4595-8195-b96e91d607ce" containerName="glance-httpd" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.683103 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e104b5a-57be-474d-957f-25a86e9111a1" containerName="heat-cfnapi" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.684549 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.695003 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.695432 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.708675 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834693 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834768 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdw6t\" (UniqueName: \"kubernetes.io/projected/3fdec6fc-d28c-456b-b3a9-6eae59d27655-kube-api-access-jdw6t\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834827 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fdec6fc-d28c-456b-b3a9-6eae59d27655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834859 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834912 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834953 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-scripts\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834977 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fdec6fc-d28c-456b-b3a9-6eae59d27655-logs\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.834996 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-config-data\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.936899 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fdec6fc-d28c-456b-b3a9-6eae59d27655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.936948 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937023 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937065 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-scripts\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fdec6fc-d28c-456b-b3a9-6eae59d27655-logs\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937108 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-config-data\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937195 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937221 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdw6t\" (UniqueName: \"kubernetes.io/projected/3fdec6fc-d28c-456b-b3a9-6eae59d27655-kube-api-access-jdw6t\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937716 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3fdec6fc-d28c-456b-b3a9-6eae59d27655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.937726 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fdec6fc-d28c-456b-b3a9-6eae59d27655-logs\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.938049 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.941741 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-scripts\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.945361 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.956749 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-config-data\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.962843 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdw6t\" (UniqueName: \"kubernetes.io/projected/3fdec6fc-d28c-456b-b3a9-6eae59d27655-kube-api-access-jdw6t\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.963073 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fdec6fc-d28c-456b-b3a9-6eae59d27655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.986826 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3fdec6fc-d28c-456b-b3a9-6eae59d27655\") " pod="openstack/glance-default-external-api-0" Feb 26 11:33:57 crc kubenswrapper[4724]: I0226 11:33:57.993647 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94be3313-633f-4595-8195-b96e91d607ce" path="/var/lib/kubelet/pods/94be3313-633f-4595-8195-b96e91d607ce/volumes" Feb 26 11:33:58 crc kubenswrapper[4724]: I0226 
11:33:58.008810 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 11:33:58 crc kubenswrapper[4724]: I0226 11:33:58.717404 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 11:33:59 crc kubenswrapper[4724]: I0226 11:33:59.600052 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3fdec6fc-d28c-456b-b3a9-6eae59d27655","Type":"ContainerStarted","Data":"7a0f7e93842443b5220a9fef1d662642b961c3d6a0d9e3fcdcce7b582fcb23c1"} Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.234612 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535094-nv6rp"] Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.236054 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.240892 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.241120 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.241301 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.290263 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-nv6rp"] Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.399108 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gr9m\" (UniqueName: \"kubernetes.io/projected/235c375a-3a2e-4ec0-88d9-6aee5b464dd2-kube-api-access-2gr9m\") pod \"auto-csr-approver-29535094-nv6rp\" (UID: \"235c375a-3a2e-4ec0-88d9-6aee5b464dd2\") " pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.501376 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gr9m\" (UniqueName: \"kubernetes.io/projected/235c375a-3a2e-4ec0-88d9-6aee5b464dd2-kube-api-access-2gr9m\") pod \"auto-csr-approver-29535094-nv6rp\" (UID: \"235c375a-3a2e-4ec0-88d9-6aee5b464dd2\") " pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.537319 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gr9m\" (UniqueName: \"kubernetes.io/projected/235c375a-3a2e-4ec0-88d9-6aee5b464dd2-kube-api-access-2gr9m\") pod \"auto-csr-approver-29535094-nv6rp\" (UID: \"235c375a-3a2e-4ec0-88d9-6aee5b464dd2\") " pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.600243 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.639043 4724 generic.go:334] "Generic (PLEG): container finished" podID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" containerID="524c1bfe8f3d0d91ea2d6f151b6b555d1cb1ea11319c9dd099fb62aa16cc2055" exitCode=0 Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.639207 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vzfph" event={"ID":"41d636b5-9092-4373-a1f9-8c79f5b9ddaa","Type":"ContainerDied","Data":"524c1bfe8f3d0d91ea2d6f151b6b555d1cb1ea11319c9dd099fb62aa16cc2055"} Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.659268 4724 generic.go:334] "Generic (PLEG): container finished" podID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerID="59ff5d07080f5052f23a5fd6e3d21fab7f6babe570448b56edac0b7937e74839" exitCode=137 Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.659307 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerDied","Data":"59ff5d07080f5052f23a5fd6e3d21fab7f6babe570448b56edac0b7937e74839"} Feb 26 11:34:00 crc kubenswrapper[4724]: I0226 11:34:00.881517 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012025 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-log-httpd\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012150 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b2ps\" (UniqueName: \"kubernetes.io/projected/db35a650-999a-45fc-8bb6-85b86ac7feba-kube-api-access-8b2ps\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012222 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-run-httpd\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012278 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-config-data\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-scripts\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012388 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-sg-core-conf-yaml\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.012419 4724 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-combined-ca-bundle\") pod \"db35a650-999a-45fc-8bb6-85b86ac7feba\" (UID: \"db35a650-999a-45fc-8bb6-85b86ac7feba\") " Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.016401 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.016430 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.021575 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db35a650-999a-45fc-8bb6-85b86ac7feba-kube-api-access-8b2ps" (OuterVolumeSpecName: "kube-api-access-8b2ps") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "kube-api-access-8b2ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.025300 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-scripts" (OuterVolumeSpecName: "scripts") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: W0226 11:34:01.083972 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod235c375a_3a2e_4ec0_88d9_6aee5b464dd2.slice/crio-8bac13d753f1ee5c0e9b616ee6bd776f3f16ea425b599d917a3c957494fd518c WatchSource:0}: Error finding container 8bac13d753f1ee5c0e9b616ee6bd776f3f16ea425b599d917a3c957494fd518c: Status 404 returned error can't find the container with id 8bac13d753f1ee5c0e9b616ee6bd776f3f16ea425b599d917a3c957494fd518c Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.091046 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-nv6rp"] Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.114501 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.114528 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b2ps\" (UniqueName: \"kubernetes.io/projected/db35a650-999a-45fc-8bb6-85b86ac7feba-kube-api-access-8b2ps\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.114540 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db35a650-999a-45fc-8bb6-85b86ac7feba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.114550 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.122760 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.203218 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-config-data" (OuterVolumeSpecName: "config-data") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.207449 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db35a650-999a-45fc-8bb6-85b86ac7feba" (UID: "db35a650-999a-45fc-8bb6-85b86ac7feba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.216831 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.217106 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.217283 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db35a650-999a-45fc-8bb6-85b86ac7feba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.363800 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.364103 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-log" containerID="cri-o://221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83" gracePeriod=30 Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.364290 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-httpd" containerID="cri-o://c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9" gracePeriod=30 Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.688276 4724 generic.go:334] "Generic (PLEG): container finished" podID="cf5ef727-2542-4452-aff8-f34f3edea383" containerID="221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83" exitCode=143 Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.688386 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5ef727-2542-4452-aff8-f34f3edea383","Type":"ContainerDied","Data":"221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83"} Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.699149 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db35a650-999a-45fc-8bb6-85b86ac7feba","Type":"ContainerDied","Data":"8e40d91a23886a1ba294991fce2154fbca59c7f94b607c93804f64d27565828c"} Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.699564 4724 scope.go:117] "RemoveContainer" containerID="59ff5d07080f5052f23a5fd6e3d21fab7f6babe570448b56edac0b7937e74839" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.699798 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.718888 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" event={"ID":"235c375a-3a2e-4ec0-88d9-6aee5b464dd2","Type":"ContainerStarted","Data":"8bac13d753f1ee5c0e9b616ee6bd776f3f16ea425b599d917a3c957494fd518c"} Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.721191 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3fdec6fc-d28c-456b-b3a9-6eae59d27655","Type":"ContainerStarted","Data":"d3e059f3959973f3e2050aa7f86f5ab3e8aba6fb2b695e36cf6c0f3184c29ca3"} Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.763367 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.776763 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.796966 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:01 crc kubenswrapper[4724]: E0226 11:34:01.797507 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="proxy-httpd" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797527 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="proxy-httpd" Feb 26 11:34:01 crc kubenswrapper[4724]: E0226 11:34:01.797554 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="sg-core" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797564 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="sg-core" Feb 26 11:34:01 crc kubenswrapper[4724]: E0226 11:34:01.797602 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-notification-agent" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797610 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-notification-agent" Feb 26 11:34:01 crc kubenswrapper[4724]: E0226 11:34:01.797627 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-central-agent" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797635 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-central-agent" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797860 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="sg-core" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797876 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-central-agent" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797897 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" containerName="ceilometer-notification-agent" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.797911 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" 
containerName="proxy-httpd" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.802743 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.803689 4724 scope.go:117] "RemoveContainer" containerID="b0362de03f2cd567687b118d0f828f83e56ce784e121a4bc79ae72581617668a" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.810431 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.810686 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.823056 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.876459 4724 scope.go:117] "RemoveContainer" containerID="25f77826f7ccccb3eab037451fce2fd54ffb84f853a1eee34ea5e492cb3b5fc2" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.927302 4724 scope.go:117] "RemoveContainer" containerID="6aec32bf39088fc922c9b532d6aafd73cbdb270f60f384f063c2b0c408015dcf" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.947932 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-config-data\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.948082 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.948193 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsjs\" (UniqueName: \"kubernetes.io/projected/048e9f44-4790-4d6a-91fa-52955bb5b3cb-kube-api-access-kvsjs\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.948251 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-log-httpd\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.948320 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-scripts\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.948524 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-run-httpd\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.948557 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:01 crc kubenswrapper[4724]: I0226 11:34:01.998600 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db35a650-999a-45fc-8bb6-85b86ac7feba" path="/var/lib/kubelet/pods/db35a650-999a-45fc-8bb6-85b86ac7feba/volumes" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.050307 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.050361 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-run-httpd\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.050643 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-config-data\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.051694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-run-httpd\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.052524 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.053315 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsjs\" (UniqueName: \"kubernetes.io/projected/048e9f44-4790-4d6a-91fa-52955bb5b3cb-kube-api-access-kvsjs\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.053455 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-log-httpd\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.053608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-scripts\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.054714 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-log-httpd\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.058637 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.081704 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsjs\" (UniqueName: \"kubernetes.io/projected/048e9f44-4790-4d6a-91fa-52955bb5b3cb-kube-api-access-kvsjs\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.083755 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-config-data\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.083949 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.085108 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-scripts\") pod \"ceilometer-0\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.137310 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.229554 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.364728 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-combined-ca-bundle\") pod \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.365100 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-scripts\") pod \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.365243 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-config-data\") pod \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.365287 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jsn2\" (UniqueName: \"kubernetes.io/projected/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-kube-api-access-9jsn2\") pod \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\" (UID: \"41d636b5-9092-4373-a1f9-8c79f5b9ddaa\") " Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.381152 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-scripts" (OuterVolumeSpecName: "scripts") pod "41d636b5-9092-4373-a1f9-8c79f5b9ddaa" (UID: "41d636b5-9092-4373-a1f9-8c79f5b9ddaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.382616 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-kube-api-access-9jsn2" (OuterVolumeSpecName: "kube-api-access-9jsn2") pod "41d636b5-9092-4373-a1f9-8c79f5b9ddaa" (UID: "41d636b5-9092-4373-a1f9-8c79f5b9ddaa"). InnerVolumeSpecName "kube-api-access-9jsn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.419895 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41d636b5-9092-4373-a1f9-8c79f5b9ddaa" (UID: "41d636b5-9092-4373-a1f9-8c79f5b9ddaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.467701 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.467918 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.468017 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jsn2\" (UniqueName: \"kubernetes.io/projected/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-kube-api-access-9jsn2\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.485467 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-config-data" (OuterVolumeSpecName: "config-data") pod "41d636b5-9092-4373-a1f9-8c79f5b9ddaa" (UID: "41d636b5-9092-4373-a1f9-8c79f5b9ddaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.569682 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d636b5-9092-4373-a1f9-8c79f5b9ddaa-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.621523 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.740554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3fdec6fc-d28c-456b-b3a9-6eae59d27655","Type":"ContainerStarted","Data":"4588256a3233d1332981cf57ed64803b88f745fd3129d0756f1fe7c05ba42262"} Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.742914 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerStarted","Data":"963ef042a4f1e6eec7c702faa1906d7ae23c26f76513d015a084478004b7f120"} Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.764370 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vzfph" event={"ID":"41d636b5-9092-4373-a1f9-8c79f5b9ddaa","Type":"ContainerDied","Data":"92d59f36daa876318c86d1bd3f8fc6f19dc1e4bd443950d5bfe07ef57907a08f"} Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.764420 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d59f36daa876318c86d1bd3f8fc6f19dc1e4bd443950d5bfe07ef57907a08f" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.764497 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vzfph" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.771102 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.771083601 podStartE2EDuration="5.771083601s" podCreationTimestamp="2026-02-26 11:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:02.767089689 +0000 UTC m=+1709.422828804" watchObservedRunningTime="2026-02-26 11:34:02.771083601 +0000 UTC m=+1709.426822716" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.822495 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 11:34:02 crc kubenswrapper[4724]: E0226 11:34:02.822991 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" containerName="nova-cell0-conductor-db-sync" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.823016 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" containerName="nova-cell0-conductor-db-sync" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.823276 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" containerName="nova-cell0-conductor-db-sync" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.824103 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.828196 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.828782 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b6gk7" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.855108 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.964868 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.980976 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.981061 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sws2\" (UniqueName: \"kubernetes.io/projected/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-kube-api-access-5sws2\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 11:34:02.981134 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:02 crc kubenswrapper[4724]: I0226 
11:34:02.995017 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.083952 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.084706 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sws2\" (UniqueName: \"kubernetes.io/projected/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-kube-api-access-5sws2\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.084863 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.089626 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.090529 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.105744 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sws2\" (UniqueName: \"kubernetes.io/projected/aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f-kube-api-access-5sws2\") pod \"nova-cell0-conductor-0\" (UID: \"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f\") " pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.146407 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.780647 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerStarted","Data":"82ec9effbe5ec63631a443001e329571e8ec03059956ec5c200796fb0d96a8f2"} Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.783031 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" event={"ID":"235c375a-3a2e-4ec0-88d9-6aee5b464dd2","Type":"ContainerStarted","Data":"4157c795975368696249e150c6d3ade3f4b5dd8cbf8e5f014d54c297115943fc"} Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.805397 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" podStartSLOduration=2.498598841 podStartE2EDuration="3.805369725s" podCreationTimestamp="2026-02-26 11:34:00 +0000 UTC" firstStartedPulling="2026-02-26 11:34:01.093099667 +0000 UTC m=+1707.748838782" lastFinishedPulling="2026-02-26 11:34:02.399870551 +0000 UTC m=+1709.055609666" observedRunningTime="2026-02-26 11:34:03.803822945 +0000 UTC m=+1710.459562070" watchObservedRunningTime="2026-02-26 11:34:03.805369725 +0000 UTC m=+1710.461108860" Feb 26 11:34:03 crc kubenswrapper[4724]: I0226 11:34:03.851764 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 11:34:04 crc kubenswrapper[4724]: I0226 11:34:04.361393 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:04 crc kubenswrapper[4724]: I0226 11:34:04.794868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f","Type":"ContainerStarted","Data":"13bb0a70588c61e0a8449571080485c099197cf346cb55c42ed898fe64946f06"} Feb 26 11:34:04 crc kubenswrapper[4724]: I0226 11:34:04.794934 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f","Type":"ContainerStarted","Data":"2be5e2de60796c3b0ee6c1c3e0c4d10ec254b5e527a20db90c04f60065c7347c"} Feb 26 11:34:04 crc kubenswrapper[4724]: I0226 11:34:04.817960 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.817941967 podStartE2EDuration="2.817941967s" podCreationTimestamp="2026-02-26 11:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:04.812479998 +0000 UTC m=+1711.468219113" watchObservedRunningTime="2026-02-26 11:34:04.817941967 +0000 UTC m=+1711.473681082" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.745539 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.820855 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x8fd\" (UniqueName: \"kubernetes.io/projected/cf5ef727-2542-4452-aff8-f34f3edea383-kube-api-access-8x8fd\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.820915 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-httpd-run\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.820968 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-config-data\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.821007 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-internal-tls-certs\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.821156 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-combined-ca-bundle\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.821285 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.821315 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-scripts\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.821351 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-logs\") pod \"cf5ef727-2542-4452-aff8-f34f3edea383\" (UID: \"cf5ef727-2542-4452-aff8-f34f3edea383\") " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.822213 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-logs" (OuterVolumeSpecName: "logs") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.825733 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.832466 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-scripts" (OuterVolumeSpecName: "scripts") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.836394 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5ef727-2542-4452-aff8-f34f3edea383-kube-api-access-8x8fd" (OuterVolumeSpecName: "kube-api-access-8x8fd") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "kube-api-access-8x8fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.836480 4724 generic.go:334] "Generic (PLEG): container finished" podID="cf5ef727-2542-4452-aff8-f34f3edea383" containerID="c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9" exitCode=0 Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.836589 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5ef727-2542-4452-aff8-f34f3edea383","Type":"ContainerDied","Data":"c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9"} Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.836625 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf5ef727-2542-4452-aff8-f34f3edea383","Type":"ContainerDied","Data":"bbd24354fc5e252b81eabc20fa8b0a58fc6f310cfadcff1c33370c9b65a12127"} Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.836648 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.836652 4724 scope.go:117] "RemoveContainer" containerID="c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.851213 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.855126 4724 generic.go:334] "Generic (PLEG): container finished" podID="235c375a-3a2e-4ec0-88d9-6aee5b464dd2" containerID="4157c795975368696249e150c6d3ade3f4b5dd8cbf8e5f014d54c297115943fc" exitCode=0 Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.855830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" event={"ID":"235c375a-3a2e-4ec0-88d9-6aee5b464dd2","Type":"ContainerDied","Data":"4157c795975368696249e150c6d3ade3f4b5dd8cbf8e5f014d54c297115943fc"} Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.861775 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerStarted","Data":"41ed7c3bdce4a3d6bcc9c8f1b0b42339323a3e3a8dd0d28068f0ae6c05b99c41"} Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.861810 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.925426 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.925691 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.925775 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.925848 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x8fd\" (UniqueName: \"kubernetes.io/projected/cf5ef727-2542-4452-aff8-f34f3edea383-kube-api-access-8x8fd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.925933 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf5ef727-2542-4452-aff8-f34f3edea383-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.949109 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.976021 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:05 crc kubenswrapper[4724]: I0226 11:34:05.999646 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-config-data" (OuterVolumeSpecName: "config-data") pod "cf5ef727-2542-4452-aff8-f34f3edea383" (UID: "cf5ef727-2542-4452-aff8-f34f3edea383"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.026809 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.027609 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.027733 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.027814 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ef727-2542-4452-aff8-f34f3edea383-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.027917 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.134564 4724 scope.go:117] "RemoveContainer" containerID="221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.215230 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.226875 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.260039 4724 scope.go:117] "RemoveContainer" containerID="c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9" Feb 26 11:34:06 crc kubenswrapper[4724]: E0226 11:34:06.260800 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9\": container with ID starting with c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9 not found: ID does not exist" containerID="c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.260854 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9"} err="failed to get container status \"c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9\": rpc error: code = NotFound desc = could not find container \"c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9\": container with ID starting with c34af2cb6a484a31df2576ead18e4b3a83248d1907d0706d6e0f6f656e3252e9 not found: ID does not exist" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 
11:34:06.260888 4724 scope.go:117] "RemoveContainer" containerID="221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83" Feb 26 11:34:06 crc kubenswrapper[4724]: E0226 11:34:06.261260 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83\": container with ID starting with 221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83 not found: ID does not exist" containerID="221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.261281 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83"} err="failed to get container status \"221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83\": rpc error: code = NotFound desc = could not find container \"221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83\": container with ID starting with 221b6476efce42c4aba02169aa0b9f7bdba5d3b098bfe33b19d34eafe9498f83 not found: ID does not exist" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.273227 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:34:06 crc kubenswrapper[4724]: E0226 11:34:06.273745 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-log" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.273775 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-log" Feb 26 11:34:06 crc kubenswrapper[4724]: E0226 11:34:06.273806 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-httpd" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.273817 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-httpd" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.274119 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-log" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.274144 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" containerName="glance-httpd" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.278511 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.305555 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.307920 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.366036 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.434736 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57977849d4-8s5ds" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.445627 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448166 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448249 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448278 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kftsx\" (UniqueName: \"kubernetes.io/projected/4468be96-ea3b-4b93-8c93-82b6e51401e1-kube-api-access-kftsx\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448336 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448419 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4468be96-ea3b-4b93-8c93-82b6e51401e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.448467 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4468be96-ea3b-4b93-8c93-82b6e51401e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.558976 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559106 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559132 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kftsx\" (UniqueName: \"kubernetes.io/projected/4468be96-ea3b-4b93-8c93-82b6e51401e1-kube-api-access-kftsx\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559161 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559208 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468be96-ea3b-4b93-8c93-82b6e51401e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.559460 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/4468be96-ea3b-4b93-8c93-82b6e51401e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.560030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4468be96-ea3b-4b93-8c93-82b6e51401e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.560361 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468be96-ea3b-4b93-8c93-82b6e51401e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.560991 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.576042 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ddfb9fd96-hzc8c"] Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.577684 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.590448 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.590820 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.604776 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4468be96-ea3b-4b93-8c93-82b6e51401e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.625283 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kftsx\" (UniqueName: \"kubernetes.io/projected/4468be96-ea3b-4b93-8c93-82b6e51401e1-kube-api-access-kftsx\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.720916 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4468be96-ea3b-4b93-8c93-82b6e51401e1\") " pod="openstack/glance-default-internal-api-0" Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.878483 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerStarted","Data":"28fc73408c6ff1ebe571e5a8de28f40a878a1685692b945016dce58323c2dc99"} Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.881055 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon-log" containerID="cri-o://6c0cf98c9d0fef3ab39c0703b5c93439207fec4b8a3f2f2032db879069cde925" gracePeriod=30 Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.881533 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" containerID="cri-o://95449fe5b1852e70ef5d4115673dda1bb1e3c75529c1bdd990fe212a5d65423d" gracePeriod=30 Feb 26 11:34:06 crc kubenswrapper[4724]: I0226 11:34:06.966151 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.258375 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.379104 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gr9m\" (UniqueName: \"kubernetes.io/projected/235c375a-3a2e-4ec0-88d9-6aee5b464dd2-kube-api-access-2gr9m\") pod \"235c375a-3a2e-4ec0-88d9-6aee5b464dd2\" (UID: \"235c375a-3a2e-4ec0-88d9-6aee5b464dd2\") " Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.391205 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235c375a-3a2e-4ec0-88d9-6aee5b464dd2-kube-api-access-2gr9m" (OuterVolumeSpecName: "kube-api-access-2gr9m") pod "235c375a-3a2e-4ec0-88d9-6aee5b464dd2" (UID: "235c375a-3a2e-4ec0-88d9-6aee5b464dd2"). InnerVolumeSpecName "kube-api-access-2gr9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.483958 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gr9m\" (UniqueName: \"kubernetes.io/projected/235c375a-3a2e-4ec0-88d9-6aee5b464dd2-kube-api-access-2gr9m\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.738041 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 11:34:07 crc kubenswrapper[4724]: W0226 11:34:07.776300 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4468be96_ea3b_4b93_8c93_82b6e51401e1.slice/crio-fc388c03e5c2dca7172fae531b4b360795fc42a52bbe02e2663ad106dcef2ef1 WatchSource:0}: Error finding container fc388c03e5c2dca7172fae531b4b360795fc42a52bbe02e2663ad106dcef2ef1: Status 404 returned error can't find the container with id fc388c03e5c2dca7172fae531b4b360795fc42a52bbe02e2663ad106dcef2ef1 Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.900951 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" event={"ID":"235c375a-3a2e-4ec0-88d9-6aee5b464dd2","Type":"ContainerDied","Data":"8bac13d753f1ee5c0e9b616ee6bd776f3f16ea425b599d917a3c957494fd518c"} Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.901001 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bac13d753f1ee5c0e9b616ee6bd776f3f16ea425b599d917a3c957494fd518c" Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.900968 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535094-nv6rp" Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.918295 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4468be96-ea3b-4b93-8c93-82b6e51401e1","Type":"ContainerStarted","Data":"fc388c03e5c2dca7172fae531b4b360795fc42a52bbe02e2663ad106dcef2ef1"} Feb 26 11:34:07 crc kubenswrapper[4724]: I0226 11:34:07.997234 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5ef727-2542-4452-aff8-f34f3edea383" path="/var/lib/kubelet/pods/cf5ef727-2542-4452-aff8-f34f3edea383/volumes" Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.010201 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.010264 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.073888 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.090541 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.189976 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-zp6m5"] Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.203153 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535088-zp6m5"] Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.962866 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"4468be96-ea3b-4b93-8c93-82b6e51401e1","Type":"ContainerStarted","Data":"4d4cc37d8b0c22a39b5f606937452a651071353682c8473c1a50d1cab0f9e89b"} Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.963193 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:34:08 crc kubenswrapper[4724]: I0226 11:34:08.963376 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 11:34:09 crc kubenswrapper[4724]: I0226 11:34:09.973793 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4468be96-ea3b-4b93-8c93-82b6e51401e1","Type":"ContainerStarted","Data":"032f62704cca3c3c09b40d8e97fb6b4c48dff342986625c78e730f442c5c3ac4"} Feb 26 11:34:09 crc kubenswrapper[4724]: I0226 11:34:09.987764 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07227daa-9b2f-4573-a280-84d80a8b9db7" path="/var/lib/kubelet/pods/07227daa-9b2f-4573-a280-84d80a8b9db7/volumes" Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.005743 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.005725061 podStartE2EDuration="4.005725061s" podCreationTimestamp="2026-02-26 11:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:10.000031976 +0000 UTC m=+1716.655771101" watchObservedRunningTime="2026-02-26 11:34:10.005725061 +0000 UTC m=+1716.661464176" Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.055562 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:53086->10.217.0.154:8443: read: connection reset by peer" Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.985122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerStarted","Data":"0c952b62f7bdc1edefdb98180df262ee1945403b727b08efbde27c5f2f3f33c0"} Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.985515 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.985148 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-central-agent" containerID="cri-o://82ec9effbe5ec63631a443001e329571e8ec03059956ec5c200796fb0d96a8f2" gracePeriod=30 Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.985609 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="proxy-httpd" containerID="cri-o://0c952b62f7bdc1edefdb98180df262ee1945403b727b08efbde27c5f2f3f33c0" gracePeriod=30 Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.985868 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="sg-core" 
containerID="cri-o://28fc73408c6ff1ebe571e5a8de28f40a878a1685692b945016dce58323c2dc99" gracePeriod=30 Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.985655 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-notification-agent" containerID="cri-o://41ed7c3bdce4a3d6bcc9c8f1b0b42339323a3e3a8dd0d28068f0ae6c05b99c41" gracePeriod=30 Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.997086 4724 generic.go:334] "Generic (PLEG): container finished" podID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerID="95449fe5b1852e70ef5d4115673dda1bb1e3c75529c1bdd990fe212a5d65423d" exitCode=0 Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.997868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerDied","Data":"95449fe5b1852e70ef5d4115673dda1bb1e3c75529c1bdd990fe212a5d65423d"} Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.997959 4724 scope.go:117] "RemoveContainer" containerID="7099fe5c31115c0b722be7a13c0a9feb5c472f77246d6698e652b193791a6781" Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.998253 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:34:10 crc kubenswrapper[4724]: I0226 11:34:10.998272 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:34:11 crc kubenswrapper[4724]: I0226 11:34:11.028742 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.626356123 podStartE2EDuration="10.028719368s" podCreationTimestamp="2026-02-26 11:34:01 +0000 UTC" firstStartedPulling="2026-02-26 11:34:02.627134751 +0000 UTC m=+1709.282873866" lastFinishedPulling="2026-02-26 11:34:10.029497996 +0000 UTC m=+1716.685237111" observedRunningTime="2026-02-26 11:34:11.013260085 +0000 UTC m=+1717.668999210" watchObservedRunningTime="2026-02-26 11:34:11.028719368 +0000 UTC m=+1717.684458503" Feb 26 11:34:12 crc kubenswrapper[4724]: I0226 11:34:12.015124 4724 generic.go:334] "Generic (PLEG): container finished" podID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerID="0c952b62f7bdc1edefdb98180df262ee1945403b727b08efbde27c5f2f3f33c0" exitCode=0 Feb 26 11:34:12 crc kubenswrapper[4724]: I0226 11:34:12.015534 4724 generic.go:334] "Generic (PLEG): container finished" podID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerID="28fc73408c6ff1ebe571e5a8de28f40a878a1685692b945016dce58323c2dc99" exitCode=2 Feb 26 11:34:12 crc kubenswrapper[4724]: I0226 11:34:12.015546 4724 generic.go:334] "Generic (PLEG): container finished" podID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerID="41ed7c3bdce4a3d6bcc9c8f1b0b42339323a3e3a8dd0d28068f0ae6c05b99c41" exitCode=0 Feb 26 11:34:12 crc kubenswrapper[4724]: I0226 11:34:12.015266 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerDied","Data":"0c952b62f7bdc1edefdb98180df262ee1945403b727b08efbde27c5f2f3f33c0"} Feb 26 11:34:12 crc kubenswrapper[4724]: I0226 11:34:12.015596 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerDied","Data":"28fc73408c6ff1ebe571e5a8de28f40a878a1685692b945016dce58323c2dc99"} Feb 26 11:34:12 crc kubenswrapper[4724]: I0226 11:34:12.015612 4724 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerDied","Data":"41ed7c3bdce4a3d6bcc9c8f1b0b42339323a3e3a8dd0d28068f0ae6c05b99c41"} Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.175512 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.604941 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.605394 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.944107 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wsfkc"] Feb 26 11:34:13 crc kubenswrapper[4724]: E0226 11:34:13.944686 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235c375a-3a2e-4ec0-88d9-6aee5b464dd2" containerName="oc" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.944709 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="235c375a-3a2e-4ec0-88d9-6aee5b464dd2" containerName="oc" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.944942 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="235c375a-3a2e-4ec0-88d9-6aee5b464dd2" containerName="oc" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.945607 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.949261 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.949486 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 26 11:34:13 crc kubenswrapper[4724]: I0226 11:34:13.964155 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wsfkc"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.010098 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-scripts\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.010162 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjdl\" (UniqueName: \"kubernetes.io/projected/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-kube-api-access-gbjdl\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.010224 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-config-data\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.010270 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.055130 4724 generic.go:334] "Generic (PLEG): container finished" podID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerID="82ec9effbe5ec63631a443001e329571e8ec03059956ec5c200796fb0d96a8f2" exitCode=0 Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.055398 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerDied","Data":"82ec9effbe5ec63631a443001e329571e8ec03059956ec5c200796fb0d96a8f2"} Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.117705 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-scripts\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.117796 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbjdl\" (UniqueName: \"kubernetes.io/projected/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-kube-api-access-gbjdl\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.117877 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-config-data\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.117937 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.136119 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-config-data\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.137791 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-scripts\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.170514 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.181031 
4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbjdl\" (UniqueName: \"kubernetes.io/projected/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-kube-api-access-gbjdl\") pod \"nova-cell0-cell-mapping-wsfkc\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.267164 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.387397 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.388924 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.460692 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.460763 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.460787 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzgm\" (UniqueName: \"kubernetes.io/projected/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-kube-api-access-nmzgm\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.461115 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.517396 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.596822 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.596897 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.596930 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzgm\" (UniqueName: \"kubernetes.io/projected/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-kube-api-access-nmzgm\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 
11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.638171 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.648237 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.650284 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.656033 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.667830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzgm\" (UniqueName: \"kubernetes.io/projected/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-kube-api-access-nmzgm\") pod \"nova-cell1-novncproxy-0\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.670765 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.693268 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.698309 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.698403 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-config-data\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.698483 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpqfp\" (UniqueName: \"kubernetes.io/projected/b9e70ee5-2312-436e-83ab-c365c8447761-kube-api-access-rpqfp\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.698548 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9e70ee5-2312-436e-83ab-c365c8447761-logs\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.728785 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.808346 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpqfp\" (UniqueName: \"kubernetes.io/projected/b9e70ee5-2312-436e-83ab-c365c8447761-kube-api-access-rpqfp\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.808715 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9e70ee5-2312-436e-83ab-c365c8447761-logs\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.808777 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.808818 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-config-data\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.811601 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9e70ee5-2312-436e-83ab-c365c8447761-logs\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.812074 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.815408 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: E0226 11:34:14.816060 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="proxy-httpd" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816077 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="proxy-httpd" Feb 26 11:34:14 crc kubenswrapper[4724]: E0226 11:34:14.816109 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-central-agent" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816137 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-central-agent" Feb 26 11:34:14 crc kubenswrapper[4724]: E0226 11:34:14.816156 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-notification-agent" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816165 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-notification-agent" Feb 26 11:34:14 crc kubenswrapper[4724]: E0226 11:34:14.816227 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="sg-core" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816236 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="sg-core" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816482 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-central-agent" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816499 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="sg-core" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816533 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="ceilometer-notification-agent" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.816541 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" containerName="proxy-httpd" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.817824 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.819435 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.847384 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.860534 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-config-data\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.876232 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpqfp\" (UniqueName: \"kubernetes.io/projected/b9e70ee5-2312-436e-83ab-c365c8447761-kube-api-access-rpqfp\") pod \"nova-api-0\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " pod="openstack/nova-api-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.887252 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916155 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-run-httpd\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916204 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-scripts\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916389 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-combined-ca-bundle\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916571 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvsjs\" (UniqueName: \"kubernetes.io/projected/048e9f44-4790-4d6a-91fa-52955bb5b3cb-kube-api-access-kvsjs\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916588 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916613 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-log-httpd\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916648 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-config-data\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.916675 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-sg-core-conf-yaml\") pod \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\" (UID: \"048e9f44-4790-4d6a-91fa-52955bb5b3cb\") " Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.917040 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.917699 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-config-data\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.917781 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea9e253b-edea-4b96-a04d-30e3d8282eb1-logs\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.917846 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99p6\" (UniqueName: \"kubernetes.io/projected/ea9e253b-edea-4b96-a04d-30e3d8282eb1-kube-api-access-q99p6\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.918051 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.918954 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.922733 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-scripts" (OuterVolumeSpecName: "scripts") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.957433 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048e9f44-4790-4d6a-91fa-52955bb5b3cb-kube-api-access-kvsjs" (OuterVolumeSpecName: "kube-api-access-kvsjs") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). InnerVolumeSpecName "kube-api-access-kvsjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.963577 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.965155 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:14 crc kubenswrapper[4724]: I0226 11:34:14.976868 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.011937 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025071 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-config-data\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025206 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025237 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-config-data\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025287 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea9e253b-edea-4b96-a04d-30e3d8282eb1-logs\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025383 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99p6\" (UniqueName: \"kubernetes.io/projected/ea9e253b-edea-4b96-a04d-30e3d8282eb1-kube-api-access-q99p6\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025546 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42pq\" (UniqueName: \"kubernetes.io/projected/6c641824-adb3-47ca-88e7-8ae6b13b28ea-kube-api-access-j42pq\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025660 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025800 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025822 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvsjs\" (UniqueName: \"kubernetes.io/projected/048e9f44-4790-4d6a-91fa-52955bb5b3cb-kube-api-access-kvsjs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.025836 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/048e9f44-4790-4d6a-91fa-52955bb5b3cb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.032768 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.033033 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea9e253b-edea-4b96-a04d-30e3d8282eb1-logs\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.034136 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-config-data\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.072263 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nzxmp"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.080350 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.107766 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nzxmp"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.107909 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.120515 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.127703 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.127737 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-config-data\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.127825 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j42pq\" (UniqueName: \"kubernetes.io/projected/6c641824-adb3-47ca-88e7-8ae6b13b28ea-kube-api-access-j42pq\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.127922 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.177647 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.184708 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j42pq\" (UniqueName: \"kubernetes.io/projected/6c641824-adb3-47ca-88e7-8ae6b13b28ea-kube-api-access-j42pq\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.195788 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99p6\" (UniqueName: \"kubernetes.io/projected/ea9e253b-edea-4b96-a04d-30e3d8282eb1-kube-api-access-q99p6\") pod \"nova-metadata-0\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.201685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"048e9f44-4790-4d6a-91fa-52955bb5b3cb","Type":"ContainerDied","Data":"963ef042a4f1e6eec7c702faa1906d7ae23c26f76513d015a084478004b7f120"} Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.201735 4724 scope.go:117] "RemoveContainer" containerID="0c952b62f7bdc1edefdb98180df262ee1945403b727b08efbde27c5f2f3f33c0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.201948 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.204822 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-config-data\") pod \"nova-scheduler-0\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.230231 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.230309 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.230413 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.230496 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-svc\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.231618 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnlnq\" (UniqueName: \"kubernetes.io/projected/2746e33a-3533-4464-abfb-2ead8cf17856-kube-api-access-qnlnq\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.231692 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-config\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.261940 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.307226 4724 scope.go:117] "RemoveContainer" containerID="28fc73408c6ff1ebe571e5a8de28f40a878a1685692b945016dce58323c2dc99" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.327942 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337118 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-svc\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337431 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnlnq\" (UniqueName: \"kubernetes.io/projected/2746e33a-3533-4464-abfb-2ead8cf17856-kube-api-access-qnlnq\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337460 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-config\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337590 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337635 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.337725 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.338944 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.338966 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.340090 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-svc\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.341071 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.347066 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-config\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.365042 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnlnq\" (UniqueName: \"kubernetes.io/projected/2746e33a-3533-4464-abfb-2ead8cf17856-kube-api-access-qnlnq\") pod \"dnsmasq-dns-7877d89589-nzxmp\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.430821 4724 scope.go:117] "RemoveContainer" containerID="41ed7c3bdce4a3d6bcc9c8f1b0b42339323a3e3a8dd0d28068f0ae6c05b99c41" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.449378 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-config-data" (OuterVolumeSpecName: "config-data") pod "048e9f44-4790-4d6a-91fa-52955bb5b3cb" (UID: "048e9f44-4790-4d6a-91fa-52955bb5b3cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.494432 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.503109 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.546875 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e9f44-4790-4d6a-91fa-52955bb5b3cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.603937 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wsfkc"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.683329 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.832311 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.858121 4724 scope.go:117] "RemoveContainer" containerID="82ec9effbe5ec63631a443001e329571e8ec03059956ec5c200796fb0d96a8f2" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.894163 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.938098 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.945610 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.951810 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:34:15 crc kubenswrapper[4724]: I0226 11:34:15.952088 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.018793 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048e9f44-4790-4d6a-91fa-52955bb5b3cb" path="/var/lib/kubelet/pods/048e9f44-4790-4d6a-91fa-52955bb5b3cb/volumes" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.032264 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.060250 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44j7j\" (UniqueName: \"kubernetes.io/projected/0818e705-e62a-4d4f-9fa3-47e66a0f8946-kube-api-access-44j7j\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.060372 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-run-httpd\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.060563 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-scripts\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.060751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.060834 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-config-data\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.060917 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-log-httpd\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.061012 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169276 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169606 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-config-data\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169674 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-log-httpd\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169710 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169863 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44j7j\" (UniqueName: \"kubernetes.io/projected/0818e705-e62a-4d4f-9fa3-47e66a0f8946-kube-api-access-44j7j\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169904 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-run-httpd\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.169994 4724 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-scripts\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.171863 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-log-httpd\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.172887 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-run-httpd\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.187667 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-scripts\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.190888 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.191719 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.241716 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-config-data\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.250960 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44j7j\" (UniqueName: \"kubernetes.io/projected/0818e705-e62a-4d4f-9fa3-47e66a0f8946-kube-api-access-44j7j\") pod \"ceilometer-0\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") " pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.295368 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.326501 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pqttj"] Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.377550 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.420407 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.420637 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.516091 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pqttj"] Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.546031 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.546119 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24bf\" (UniqueName: \"kubernetes.io/projected/d532a325-83f4-45d6-8363-8fab02ca4afc-kube-api-access-r24bf\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.546191 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-scripts\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.546265 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-config-data\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.565561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2","Type":"ContainerStarted","Data":"832596a61e3b496717ba979fd5b5274d6bab4373c62739c530718dce064943d3"} Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.595032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wsfkc" event={"ID":"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d","Type":"ContainerStarted","Data":"1a74ab8ad11c279bb7e9666b6bdb6a4b12af16704f6467d742d9b7c4b36842d5"} Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.615369 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.649497 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-config-data\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.649642 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.649712 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24bf\" (UniqueName: \"kubernetes.io/projected/d532a325-83f4-45d6-8363-8fab02ca4afc-kube-api-access-r24bf\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.649767 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-scripts\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.679937 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.703290 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-config-data\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.713485 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-scripts\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.714687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24bf\" (UniqueName: \"kubernetes.io/projected/d532a325-83f4-45d6-8363-8fab02ca4afc-kube-api-access-r24bf\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.733423 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pqttj\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.790641 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.864741 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.912261 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.912304 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.967425 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:16 crc kubenswrapper[4724]: I0226 11:34:16.967469 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.060361 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nzxmp"] Feb 26 11:34:17 crc kubenswrapper[4724]: W0226 11:34:17.077390 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2746e33a_3533_4464_abfb_2ead8cf17856.slice/crio-1fe70fce880e2944d2cb675d9b2a490a72cdd8c05260e57729da78b62470e83d WatchSource:0}: Error finding container 1fe70fce880e2944d2cb675d9b2a490a72cdd8c05260e57729da78b62470e83d: Status 404 returned error can't find the container with id 1fe70fce880e2944d2cb675d9b2a490a72cdd8c05260e57729da78b62470e83d Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.152218 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.177090 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.405210 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.628920 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wsfkc" event={"ID":"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d","Type":"ContainerStarted","Data":"d15d1e88bef1821a3610412a77c674c3b0a76248f6c7eeb262765f4a14d32856"} Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.649636 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea9e253b-edea-4b96-a04d-30e3d8282eb1","Type":"ContainerStarted","Data":"79ad56eb63d0f1b5c1d97594d23e8d7794dca87a78c34cc2107eb4d983b5be8d"} Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.652696 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pqttj"] Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.676337 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wsfkc" podStartSLOduration=4.676316999 podStartE2EDuration="4.676316999s" podCreationTimestamp="2026-02-26 11:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:17.674705788 +0000 UTC m=+1724.330444903" watchObservedRunningTime="2026-02-26 11:34:17.676316999 +0000 UTC m=+1724.332056114" Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.677049 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerStarted","Data":"8d438d2a002cc75deb27d39c0eea0f7497b346dabf162e42ecce5f27938a50f8"} Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.701015 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" event={"ID":"2746e33a-3533-4464-abfb-2ead8cf17856","Type":"ContainerStarted","Data":"1fe70fce880e2944d2cb675d9b2a490a72cdd8c05260e57729da78b62470e83d"} Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.711866 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9e70ee5-2312-436e-83ab-c365c8447761","Type":"ContainerStarted","Data":"3266fc228952a277c7a62fc535446149bad372ff9ff2129522856bff35a2f0a8"} Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.721268 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c641824-adb3-47ca-88e7-8ae6b13b28ea","Type":"ContainerStarted","Data":"d758c2d25330c9e411c471effa330d987bb7d0955918e22c3c68e82528e8aee7"} Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.721817 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:17 crc kubenswrapper[4724]: I0226 11:34:17.722253 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.061914 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.512623 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.531602 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.757772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pqttj" event={"ID":"d532a325-83f4-45d6-8363-8fab02ca4afc","Type":"ContainerStarted","Data":"563a5f8a59eb586e1fef7cd004568c34552bfbb258e006ce774a199146989847"} Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.758082 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pqttj" event={"ID":"d532a325-83f4-45d6-8363-8fab02ca4afc","Type":"ContainerStarted","Data":"552a035df672019e957d8e0094ef6df214b8d47b359b555c0f4e9b7cef9e2083"} Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.764700 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerStarted","Data":"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017"} Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.775572 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="2746e33a-3533-4464-abfb-2ead8cf17856" containerID="d6dfc41e3a156c983d48353577cdc743bba7e8b4b20a78185d148d3298779533" exitCode=0 Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.775872 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" event={"ID":"2746e33a-3533-4464-abfb-2ead8cf17856","Type":"ContainerDied","Data":"d6dfc41e3a156c983d48353577cdc743bba7e8b4b20a78185d148d3298779533"} Feb 26 11:34:18 crc kubenswrapper[4724]: I0226 11:34:18.782122 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pqttj" podStartSLOduration=2.7820992909999998 podStartE2EDuration="2.782099291s" podCreationTimestamp="2026-02-26 11:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:18.777093364 +0000 UTC m=+1725.432832489" watchObservedRunningTime="2026-02-26 11:34:18.782099291 +0000 UTC m=+1725.437838406" Feb 26 11:34:19 crc kubenswrapper[4724]: I0226 11:34:19.797288 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" event={"ID":"2746e33a-3533-4464-abfb-2ead8cf17856","Type":"ContainerStarted","Data":"c101124757a7f5d9c7f1a946596eeb61327948f2e381e400eeaeab6bb26c0e81"} Feb 26 11:34:19 crc kubenswrapper[4724]: I0226 11:34:19.841811 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" podStartSLOduration=5.841793312 podStartE2EDuration="5.841793312s" podCreationTimestamp="2026-02-26 11:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:19.83974294 +0000 UTC m=+1726.495482065" watchObservedRunningTime="2026-02-26 11:34:19.841793312 +0000 UTC m=+1726.497532427" Feb 26 11:34:20 crc kubenswrapper[4724]: I0226 11:34:20.503669 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:22 crc kubenswrapper[4724]: I0226 11:34:22.057094 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:22 crc kubenswrapper[4724]: I0226 11:34:22.057507 4724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 11:34:22 crc kubenswrapper[4724]: I0226 11:34:22.338697 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.889158 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea9e253b-edea-4b96-a04d-30e3d8282eb1","Type":"ContainerStarted","Data":"37b8f5b9b52dd46d9cc4ceeabed35a5acb222663c99f880a6d19f1904986f0c4"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.889770 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea9e253b-edea-4b96-a04d-30e3d8282eb1","Type":"ContainerStarted","Data":"247f14e5e3d607fcdea201c039e4cfa41cb9e38998310f877f94de2a5527091b"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.889921 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-log" containerID="cri-o://247f14e5e3d607fcdea201c039e4cfa41cb9e38998310f877f94de2a5527091b" gracePeriod=30 
Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.890558 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-metadata" containerID="cri-o://37b8f5b9b52dd46d9cc4ceeabed35a5acb222663c99f880a6d19f1904986f0c4" gracePeriod=30 Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.903811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerStarted","Data":"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.904080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerStarted","Data":"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.908374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9e70ee5-2312-436e-83ab-c365c8447761","Type":"ContainerStarted","Data":"4deb859c06fe259b5e1eca922df71457ca6d347f7bd781c915f6e1f60ee0b235"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.908429 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9e70ee5-2312-436e-83ab-c365c8447761","Type":"ContainerStarted","Data":"bcd6074a9e059708b00e9cc3441dbf5fc0a3c9aa0a20191dc981b1443fd94592"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.910697 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2","Type":"ContainerStarted","Data":"4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.910724 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83" gracePeriod=30 Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.918230 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c641824-adb3-47ca-88e7-8ae6b13b28ea","Type":"ContainerStarted","Data":"c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000"} Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.928237 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.187319926 podStartE2EDuration="10.928215679s" podCreationTimestamp="2026-02-26 11:34:14 +0000 UTC" firstStartedPulling="2026-02-26 11:34:16.809049294 +0000 UTC m=+1723.464788409" lastFinishedPulling="2026-02-26 11:34:23.549945047 +0000 UTC m=+1730.205684162" observedRunningTime="2026-02-26 11:34:24.913262328 +0000 UTC m=+1731.569001463" watchObservedRunningTime="2026-02-26 11:34:24.928215679 +0000 UTC m=+1731.583954804" Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.936654 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.29651404 podStartE2EDuration="10.936631143s" podCreationTimestamp="2026-02-26 11:34:14 +0000 UTC" firstStartedPulling="2026-02-26 11:34:15.858342995 +0000 UTC m=+1722.514082110" 
lastFinishedPulling="2026-02-26 11:34:23.498460098 +0000 UTC m=+1730.154199213" observedRunningTime="2026-02-26 11:34:24.935998667 +0000 UTC m=+1731.591737792" watchObservedRunningTime="2026-02-26 11:34:24.936631143 +0000 UTC m=+1731.592370258" Feb 26 11:34:24 crc kubenswrapper[4724]: I0226 11:34:24.963884 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.59573066 podStartE2EDuration="10.963863865s" podCreationTimestamp="2026-02-26 11:34:14 +0000 UTC" firstStartedPulling="2026-02-26 11:34:16.411329069 +0000 UTC m=+1723.067068184" lastFinishedPulling="2026-02-26 11:34:23.779462274 +0000 UTC m=+1730.435201389" observedRunningTime="2026-02-26 11:34:24.961923216 +0000 UTC m=+1731.617662331" watchObservedRunningTime="2026-02-26 11:34:24.963863865 +0000 UTC m=+1731.619602980" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.080640 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.080691 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.328902 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.328951 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.432244 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.450357 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.234401074 podStartE2EDuration="11.450336148s" podCreationTimestamp="2026-02-26 11:34:14 +0000 UTC" firstStartedPulling="2026-02-26 11:34:16.503140774 +0000 UTC m=+1723.158879889" lastFinishedPulling="2026-02-26 11:34:23.719075848 +0000 UTC m=+1730.374814963" observedRunningTime="2026-02-26 11:34:25.001453371 +0000 UTC m=+1731.657192486" watchObservedRunningTime="2026-02-26 11:34:25.450336148 +0000 UTC m=+1732.106075263" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.495625 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.495669 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.510336 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.605042 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dcl5w"] Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.605550 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerName="dnsmasq-dns" containerID="cri-o://f93b9dad1ecf2b8546d11834589eb95fac7383a65714b01795185f8e02ab1be6" gracePeriod=10 Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.958695 4724 generic.go:334] "Generic (PLEG): container finished" podID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" 
containerID="f93b9dad1ecf2b8546d11834589eb95fac7383a65714b01795185f8e02ab1be6" exitCode=0 Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.958949 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" event={"ID":"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff","Type":"ContainerDied","Data":"f93b9dad1ecf2b8546d11834589eb95fac7383a65714b01795185f8e02ab1be6"} Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.963672 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerID="37b8f5b9b52dd46d9cc4ceeabed35a5acb222663c99f880a6d19f1904986f0c4" exitCode=0 Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.963709 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerID="247f14e5e3d607fcdea201c039e4cfa41cb9e38998310f877f94de2a5527091b" exitCode=143 Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.963808 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea9e253b-edea-4b96-a04d-30e3d8282eb1","Type":"ContainerDied","Data":"37b8f5b9b52dd46d9cc4ceeabed35a5acb222663c99f880a6d19f1904986f0c4"} Feb 26 11:34:25 crc kubenswrapper[4724]: I0226 11:34:25.963870 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea9e253b-edea-4b96-a04d-30e3d8282eb1","Type":"ContainerDied","Data":"247f14e5e3d607fcdea201c039e4cfa41cb9e38998310f877f94de2a5527091b"} Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.087602 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.130960 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.187510 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.284984 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.453446 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea9e253b-edea-4b96-a04d-30e3d8282eb1-logs\") pod \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.453622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-config-data\") pod \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.453813 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99p6\" (UniqueName: \"kubernetes.io/projected/ea9e253b-edea-4b96-a04d-30e3d8282eb1-kube-api-access-q99p6\") pod \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.453901 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-combined-ca-bundle\") pod \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\" (UID: \"ea9e253b-edea-4b96-a04d-30e3d8282eb1\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.457795 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9e253b-edea-4b96-a04d-30e3d8282eb1-logs" (OuterVolumeSpecName: "logs") pod "ea9e253b-edea-4b96-a04d-30e3d8282eb1" (UID: "ea9e253b-edea-4b96-a04d-30e3d8282eb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.462399 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9e253b-edea-4b96-a04d-30e3d8282eb1-kube-api-access-q99p6" (OuterVolumeSpecName: "kube-api-access-q99p6") pod "ea9e253b-edea-4b96-a04d-30e3d8282eb1" (UID: "ea9e253b-edea-4b96-a04d-30e3d8282eb1"). InnerVolumeSpecName "kube-api-access-q99p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.508197 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.526542 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea9e253b-edea-4b96-a04d-30e3d8282eb1" (UID: "ea9e253b-edea-4b96-a04d-30e3d8282eb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.537827 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-config-data" (OuterVolumeSpecName: "config-data") pod "ea9e253b-edea-4b96-a04d-30e3d8282eb1" (UID: "ea9e253b-edea-4b96-a04d-30e3d8282eb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.558636 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.558668 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea9e253b-edea-4b96-a04d-30e3d8282eb1-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.558679 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea9e253b-edea-4b96-a04d-30e3d8282eb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.558688 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99p6\" (UniqueName: \"kubernetes.io/projected/ea9e253b-edea-4b96-a04d-30e3d8282eb1-kube-api-access-q99p6\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.659685 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-svc\") pod \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.659842 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnkz8\" (UniqueName: \"kubernetes.io/projected/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-kube-api-access-jnkz8\") pod \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.659916 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-swift-storage-0\") pod \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.659992 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-nb\") pod \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.660041 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-sb\") pod \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.660242 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-config\") pod \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\" (UID: \"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff\") " Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.678024 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-kube-api-access-jnkz8" (OuterVolumeSpecName: "kube-api-access-jnkz8") pod "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" (UID: 
"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff"). InnerVolumeSpecName "kube-api-access-jnkz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.738415 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-config" (OuterVolumeSpecName: "config") pod "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" (UID: "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.765919 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" (UID: "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.770221 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.770267 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnkz8\" (UniqueName: \"kubernetes.io/projected/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-kube-api-access-jnkz8\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.791348 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" (UID: "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.809712 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" (UID: "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.877988 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" (UID: "ee99c1c0-7f32-43c2-a559-5ff89e37a1ff"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.878457 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.878483 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.878501 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:26.878515 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.043300 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" event={"ID":"ee99c1c0-7f32-43c2-a559-5ff89e37a1ff","Type":"ContainerDied","Data":"76bb2cb2d58dcb16e18a415bb4fb81d75f94d44023c4872ae71ae870158cf1ad"} Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.043354 4724 scope.go:117] "RemoveContainer" containerID="f93b9dad1ecf2b8546d11834589eb95fac7383a65714b01795185f8e02ab1be6" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.043536 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-dcl5w" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.082727 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea9e253b-edea-4b96-a04d-30e3d8282eb1","Type":"ContainerDied","Data":"79ad56eb63d0f1b5c1d97594d23e8d7794dca87a78c34cc2107eb4d983b5be8d"} Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.082871 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.147864 4724 scope.go:117] "RemoveContainer" containerID="a23afe89200de432a0a27176e53969b38466d0d4035e813f17a04a81115d7c2f" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.200739 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dcl5w"] Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.235824 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-dcl5w"] Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.261252 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.291273 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.318251 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:27 crc kubenswrapper[4724]: E0226 11:34:27.318791 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-metadata" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.318811 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-metadata" Feb 26 11:34:27 crc kubenswrapper[4724]: E0226 11:34:27.318827 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerName="init" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.318835 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerName="init" Feb 26 11:34:27 crc kubenswrapper[4724]: E0226 11:34:27.318856 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-log" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.318864 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-log" Feb 26 11:34:27 crc kubenswrapper[4724]: E0226 11:34:27.318876 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerName="dnsmasq-dns" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.318885 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerName="dnsmasq-dns" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.319093 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-log" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.319121 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" containerName="dnsmasq-dns" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.319139 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" containerName="nova-metadata-metadata" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.320438 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.321590 4724 scope.go:117] "RemoveContainer" containerID="37b8f5b9b52dd46d9cc4ceeabed35a5acb222663c99f880a6d19f1904986f0c4" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.326418 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.326603 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.362772 4724 scope.go:117] "RemoveContainer" containerID="247f14e5e3d607fcdea201c039e4cfa41cb9e38998310f877f94de2a5527091b" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.364582 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.494458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntp9\" (UniqueName: \"kubernetes.io/projected/e376e046-91ef-4f7d-b094-1486a82c2239-kube-api-access-kntp9\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.494517 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.494589 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e376e046-91ef-4f7d-b094-1486a82c2239-logs\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.494609 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.494659 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-config-data\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.595803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e376e046-91ef-4f7d-b094-1486a82c2239-logs\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.596036 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 
11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.596162 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-config-data\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.596424 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntp9\" (UniqueName: \"kubernetes.io/projected/e376e046-91ef-4f7d-b094-1486a82c2239-kube-api-access-kntp9\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.596520 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.606650 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.607075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e376e046-91ef-4f7d-b094-1486a82c2239-logs\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.610921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.642269 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntp9\" (UniqueName: \"kubernetes.io/projected/e376e046-91ef-4f7d-b094-1486a82c2239-kube-api-access-kntp9\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.643208 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-config-data\") pod \"nova-metadata-0\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " pod="openstack/nova-metadata-0" Feb 26 11:34:27 crc kubenswrapper[4724]: I0226 11:34:27.666739 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.036854 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9e253b-edea-4b96-a04d-30e3d8282eb1" path="/var/lib/kubelet/pods/ea9e253b-edea-4b96-a04d-30e3d8282eb1/volumes" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.038023 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee99c1c0-7f32-43c2-a559-5ff89e37a1ff" path="/var/lib/kubelet/pods/ee99c1c0-7f32-43c2-a559-5ff89e37a1ff/volumes" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.062772 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.154:8443: connect: connection refused" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.062911 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.208250 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerStarted","Data":"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2"} Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.208404 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.244781 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.731377281 podStartE2EDuration="13.244753685s" podCreationTimestamp="2026-02-26 11:34:15 +0000 UTC" firstStartedPulling="2026-02-26 11:34:17.415733392 +0000 UTC m=+1724.071472507" lastFinishedPulling="2026-02-26 11:34:26.929109796 +0000 UTC m=+1733.584848911" observedRunningTime="2026-02-26 11:34:28.229747953 +0000 UTC m=+1734.885487078" watchObservedRunningTime="2026-02-26 11:34:28.244753685 +0000 UTC m=+1734.900492800" Feb 26 11:34:28 crc kubenswrapper[4724]: I0226 11:34:28.416755 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:28 crc kubenswrapper[4724]: W0226 11:34:28.420733 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode376e046_91ef_4f7d_b094_1486a82c2239.slice/crio-61c1602946d48cf77b6fdc67d8a098c36b80274605547dc7d083b3c608ef5ee6 WatchSource:0}: Error finding container 61c1602946d48cf77b6fdc67d8a098c36b80274605547dc7d083b3c608ef5ee6: Status 404 returned error can't find the container with id 61c1602946d48cf77b6fdc67d8a098c36b80274605547dc7d083b3c608ef5ee6 Feb 26 11:34:29 crc kubenswrapper[4724]: I0226 11:34:29.219424 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e376e046-91ef-4f7d-b094-1486a82c2239","Type":"ContainerStarted","Data":"504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d"} Feb 26 11:34:29 crc kubenswrapper[4724]: I0226 11:34:29.219923 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e376e046-91ef-4f7d-b094-1486a82c2239","Type":"ContainerStarted","Data":"61c1602946d48cf77b6fdc67d8a098c36b80274605547dc7d083b3c608ef5ee6"} Feb 26 11:34:29 crc kubenswrapper[4724]: I0226 11:34:29.730087 4724 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:30 crc kubenswrapper[4724]: I0226 11:34:30.245780 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e376e046-91ef-4f7d-b094-1486a82c2239","Type":"ContainerStarted","Data":"56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e"} Feb 26 11:34:30 crc kubenswrapper[4724]: I0226 11:34:30.283918 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.283894146 podStartE2EDuration="3.283894146s" podCreationTimestamp="2026-02-26 11:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:30.266163035 +0000 UTC m=+1736.921902160" watchObservedRunningTime="2026-02-26 11:34:30.283894146 +0000 UTC m=+1736.939633261" Feb 26 11:34:32 crc kubenswrapper[4724]: I0226 11:34:32.668511 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:32 crc kubenswrapper[4724]: I0226 11:34:32.669001 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:33 crc kubenswrapper[4724]: I0226 11:34:33.312464 4724 generic.go:334] "Generic (PLEG): container finished" podID="3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" containerID="d15d1e88bef1821a3610412a77c674c3b0a76248f6c7eeb262765f4a14d32856" exitCode=0 Feb 26 11:34:33 crc kubenswrapper[4724]: I0226 11:34:33.312549 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wsfkc" event={"ID":"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d","Type":"ContainerDied","Data":"d15d1e88bef1821a3610412a77c674c3b0a76248f6c7eeb262765f4a14d32856"} Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.816218 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.922077 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbjdl\" (UniqueName: \"kubernetes.io/projected/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-kube-api-access-gbjdl\") pod \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.923204 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-config-data\") pod \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.923240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-combined-ca-bundle\") pod \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.923483 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-scripts\") pod \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\" (UID: \"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d\") " Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.929639 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-scripts" (OuterVolumeSpecName: "scripts") pod "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" (UID: "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.930698 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-kube-api-access-gbjdl" (OuterVolumeSpecName: "kube-api-access-gbjdl") pod "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" (UID: "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d"). InnerVolumeSpecName "kube-api-access-gbjdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.958783 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-config-data" (OuterVolumeSpecName: "config-data") pod "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" (UID: "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:34 crc kubenswrapper[4724]: I0226 11:34:34.959301 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" (UID: "3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.027392 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.027435 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.027446 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.027457 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbjdl\" (UniqueName: \"kubernetes.io/projected/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d-kube-api-access-gbjdl\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.091382 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.091469 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.092088 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.092115 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.095265 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.095680 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.890622 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wsfkc" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.892274 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wsfkc" event={"ID":"3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d","Type":"ContainerDied","Data":"1a74ab8ad11c279bb7e9666b6bdb6a4b12af16704f6467d742d9b7c4b36842d5"} Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.892319 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a74ab8ad11c279bb7e9666b6bdb6a4b12af16704f6467d742d9b7c4b36842d5" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.937490 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-9jpvw"] Feb 26 11:34:35 crc kubenswrapper[4724]: E0226 11:34:35.938076 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" containerName="nova-manage" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.938094 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" containerName="nova-manage" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.938372 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" containerName="nova-manage" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.941644 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.991015 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.991115 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-config\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.991240 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.991282 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.991370 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" 
Feb 26 11:34:35 crc kubenswrapper[4724]: I0226 11:34:35.991515 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxlk\" (UniqueName: \"kubernetes.io/projected/cc258ae0-3005-4720-bcde-7a7be93c5dd0-kube-api-access-pkxlk\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.035860 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-9jpvw"] Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.081696 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.081935 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" containerName="nova-scheduler-scheduler" containerID="cri-o://c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000" gracePeriod=30 Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.440762 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.440855 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.441010 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkxlk\" (UniqueName: \"kubernetes.io/projected/cc258ae0-3005-4720-bcde-7a7be93c5dd0-kube-api-access-pkxlk\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.441126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.441200 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-config\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.441264 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.442577 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.445136 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.445149 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.446041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.446541 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-config\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.530106 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkxlk\" (UniqueName: \"kubernetes.io/projected/cc258ae0-3005-4720-bcde-7a7be93c5dd0-kube-api-access-pkxlk\") pod \"dnsmasq-dns-6d99f6bc7f-9jpvw\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.546992 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.597759 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.598010 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-log" containerID="cri-o://504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d" gracePeriod=30 Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.598478 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-metadata" containerID="cri-o://56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e" gracePeriod=30 Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.775321 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.912036 4724 generic.go:334] "Generic (PLEG): container finished" podID="e376e046-91ef-4f7d-b094-1486a82c2239" containerID="504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d" exitCode=143 Feb 26 11:34:36 crc kubenswrapper[4724]: I0226 11:34:36.912253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e376e046-91ef-4f7d-b094-1486a82c2239","Type":"ContainerDied","Data":"504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d"} Feb 26 11:34:37 crc kubenswrapper[4724]: W0226 11:34:37.454219 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc258ae0_3005_4720_bcde_7a7be93c5dd0.slice/crio-00afa9433dc1c6e7c86a48f627cba7a08fb6a424fd594fdd7d7835f66d155505 WatchSource:0}: Error finding container 00afa9433dc1c6e7c86a48f627cba7a08fb6a424fd594fdd7d7835f66d155505: Status 404 returned error can't find the container with id 00afa9433dc1c6e7c86a48f627cba7a08fb6a424fd594fdd7d7835f66d155505 Feb 26 11:34:37 crc kubenswrapper[4724]: I0226 11:34:37.456547 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-9jpvw"] Feb 26 11:34:37 crc kubenswrapper[4724]: I0226 11:34:37.848132 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.009382 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-config-data\") pod \"e376e046-91ef-4f7d-b094-1486a82c2239\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.009445 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-nova-metadata-tls-certs\") pod \"e376e046-91ef-4f7d-b094-1486a82c2239\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.009566 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kntp9\" (UniqueName: \"kubernetes.io/projected/e376e046-91ef-4f7d-b094-1486a82c2239-kube-api-access-kntp9\") pod \"e376e046-91ef-4f7d-b094-1486a82c2239\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.009693 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e376e046-91ef-4f7d-b094-1486a82c2239-logs\") pod \"e376e046-91ef-4f7d-b094-1486a82c2239\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.009811 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-combined-ca-bundle\") pod \"e376e046-91ef-4f7d-b094-1486a82c2239\" (UID: \"e376e046-91ef-4f7d-b094-1486a82c2239\") " Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.012868 4724 generic.go:334] "Generic (PLEG): container finished" podID="e376e046-91ef-4f7d-b094-1486a82c2239" containerID="56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e" exitCode=0 Feb 26 
11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.013000 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.043021 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e376e046-91ef-4f7d-b094-1486a82c2239-logs" (OuterVolumeSpecName: "logs") pod "e376e046-91ef-4f7d-b094-1486a82c2239" (UID: "e376e046-91ef-4f7d-b094-1486a82c2239"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.057850 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e376e046-91ef-4f7d-b094-1486a82c2239-kube-api-access-kntp9" (OuterVolumeSpecName: "kube-api-access-kntp9") pod "e376e046-91ef-4f7d-b094-1486a82c2239" (UID: "e376e046-91ef-4f7d-b094-1486a82c2239"). InnerVolumeSpecName "kube-api-access-kntp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.058235 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" event={"ID":"cc258ae0-3005-4720-bcde-7a7be93c5dd0","Type":"ContainerStarted","Data":"00afa9433dc1c6e7c86a48f627cba7a08fb6a424fd594fdd7d7835f66d155505"} Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.058272 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e376e046-91ef-4f7d-b094-1486a82c2239","Type":"ContainerDied","Data":"56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e"} Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.058293 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e376e046-91ef-4f7d-b094-1486a82c2239","Type":"ContainerDied","Data":"61c1602946d48cf77b6fdc67d8a098c36b80274605547dc7d083b3c608ef5ee6"} Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.058314 4724 scope.go:117] "RemoveContainer" containerID="56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.067100 4724 generic.go:334] "Generic (PLEG): container finished" podID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerID="6c0cf98c9d0fef3ab39c0703b5c93439207fec4b8a3f2f2032db879069cde925" exitCode=137 Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.067373 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-log" containerID="cri-o://bcd6074a9e059708b00e9cc3441dbf5fc0a3c9aa0a20191dc981b1443fd94592" gracePeriod=30 Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.067811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerDied","Data":"6c0cf98c9d0fef3ab39c0703b5c93439207fec4b8a3f2f2032db879069cde925"} Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.068212 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-api" containerID="cri-o://4deb859c06fe259b5e1eca922df71457ca6d347f7bd781c915f6e1f60ee0b235" gracePeriod=30 Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.117276 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kntp9\" (UniqueName: 
\"kubernetes.io/projected/e376e046-91ef-4f7d-b094-1486a82c2239-kube-api-access-kntp9\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.117306 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e376e046-91ef-4f7d-b094-1486a82c2239-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.146423 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-config-data" (OuterVolumeSpecName: "config-data") pod "e376e046-91ef-4f7d-b094-1486a82c2239" (UID: "e376e046-91ef-4f7d-b094-1486a82c2239"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.174927 4724 scope.go:117] "RemoveContainer" containerID="504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.200711 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e376e046-91ef-4f7d-b094-1486a82c2239" (UID: "e376e046-91ef-4f7d-b094-1486a82c2239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.219287 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.219313 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.246747 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e376e046-91ef-4f7d-b094-1486a82c2239" (UID: "e376e046-91ef-4f7d-b094-1486a82c2239"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.296421 4724 scope.go:117] "RemoveContainer" containerID="56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e" Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.309354 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e\": container with ID starting with 56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e not found: ID does not exist" containerID="56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.309412 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e"} err="failed to get container status \"56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e\": rpc error: code = NotFound desc = could not find container \"56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e\": container with ID starting with 56b403307b9a0e19993e9d96b8c43461d16c348c1385c8e4c2f97d974c14746e not found: ID does not exist" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.309443 4724 scope.go:117] "RemoveContainer" containerID="504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.323610 4724 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e376e046-91ef-4f7d-b094-1486a82c2239-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.323679 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d\": container with ID starting with 504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d not found: ID does not exist" containerID="504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.323701 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d"} err="failed to get container status \"504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d\": rpc error: code = NotFound desc = could not find container \"504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d\": container with ID starting with 504049bad125ab2ac138c7723de8767476e9c06a86bcf9a9fcf9b63a26ed679d not found: ID does not exist" Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.330751 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9e70ee5_2312_436e_83ab_c365c8447761.slice/crio-bcd6074a9e059708b00e9cc3441dbf5fc0a3c9aa0a20191dc981b1443fd94592.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.437283 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.452169 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:38 crc 
kubenswrapper[4724]: I0226 11:34:38.482992 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ddfb9fd96-hzc8c"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.483309 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.483922 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.484011 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.484108 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.484193 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.484291 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.484356 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.484448 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon-log"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.484516 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon-log"
Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.484596 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-log"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.484657 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-log"
Feb 26 11:34:38 crc kubenswrapper[4724]: E0226 11:34:38.484749 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-metadata"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.484821 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-metadata"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.485117 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-log"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.485223 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" containerName="nova-metadata-metadata"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.485303 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon-log"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.485368 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.485434 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.485511 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.487220 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.492258 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.498638 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.498788 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.629971 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-config-data\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630221 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87dbf\" (UniqueName: \"kubernetes.io/projected/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-kube-api-access-87dbf\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630269 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-tls-certs\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630322 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-scripts\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630418 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-combined-ca-bundle\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630477 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-secret-key\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630508 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-logs\") pod \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\" (UID: \"fa39614a-db84-4214-baa1-bd7cbc7b5ae0\") "
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630854 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-config-data\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99a58e01-ce00-4c33-8d7c-046711b4ef9a-logs\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.630989 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrpv\" (UniqueName: \"kubernetes.io/projected/99a58e01-ce00-4c33-8d7c-046711b4ef9a-kube-api-access-zkrpv\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.631047 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.631105 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.632216 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-logs" (OuterVolumeSpecName: "logs") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.672265 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.673860 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-kube-api-access-87dbf" (OuterVolumeSpecName: "kube-api-access-87dbf") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "kube-api-access-87dbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.700285 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-scripts" (OuterVolumeSpecName: "scripts") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.712305 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737325 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737399 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737484 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-config-data\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99a58e01-ce00-4c33-8d7c-046711b4ef9a-logs\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737565 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkrpv\" (UniqueName: \"kubernetes.io/projected/99a58e01-ce00-4c33-8d7c-046711b4ef9a-kube-api-access-zkrpv\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0"
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737615 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737624 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737634 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.737652 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-logs\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/projected/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-kube-api-access-87dbf\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.739525 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99a58e01-ce00-4c33-8d7c-046711b4ef9a-logs\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.748209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.749001 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-config-data" (OuterVolumeSpecName: "config-data") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.752784 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.754987 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-config-data\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.758687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkrpv\" (UniqueName: \"kubernetes.io/projected/99a58e01-ce00-4c33-8d7c-046711b4ef9a-kube-api-access-zkrpv\") pod \"nova-metadata-0\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.764413 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fa39614a-db84-4214-baa1-bd7cbc7b5ae0" (UID: "fa39614a-db84-4214-baa1-bd7cbc7b5ae0"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.809294 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.839082 4724 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:38 crc kubenswrapper[4724]: I0226 11:34:38.839119 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa39614a-db84-4214-baa1-bd7cbc7b5ae0-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.091661 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9e70ee5-2312-436e-83ab-c365c8447761" containerID="bcd6074a9e059708b00e9cc3441dbf5fc0a3c9aa0a20191dc981b1443fd94592" exitCode=143 Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.092007 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9e70ee5-2312-436e-83ab-c365c8447761","Type":"ContainerDied","Data":"bcd6074a9e059708b00e9cc3441dbf5fc0a3c9aa0a20191dc981b1443fd94592"} Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.098871 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerID="1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4" exitCode=0 Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.098972 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" event={"ID":"cc258ae0-3005-4720-bcde-7a7be93c5dd0","Type":"ContainerDied","Data":"1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4"} Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.111631 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ddfb9fd96-hzc8c" event={"ID":"fa39614a-db84-4214-baa1-bd7cbc7b5ae0","Type":"ContainerDied","Data":"c6e21027ba0c7d09f5de31fa4c76eb438c2522455921165156e20f56089e2b47"} Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.111678 4724 scope.go:117] "RemoveContainer" containerID="95449fe5b1852e70ef5d4115673dda1bb1e3c75529c1bdd990fe212a5d65423d" Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.111798 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-ddfb9fd96-hzc8c" Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.249067 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ddfb9fd96-hzc8c"] Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.272254 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-ddfb9fd96-hzc8c"] Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.374743 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:34:39 crc kubenswrapper[4724]: I0226 11:34:39.386574 4724 scope.go:117] "RemoveContainer" containerID="6c0cf98c9d0fef3ab39c0703b5c93439207fec4b8a3f2f2032db879069cde925" Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.016224 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e376e046-91ef-4f7d-b094-1486a82c2239" path="/var/lib/kubelet/pods/e376e046-91ef-4f7d-b094-1486a82c2239/volumes" Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.018105 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" path="/var/lib/kubelet/pods/fa39614a-db84-4214-baa1-bd7cbc7b5ae0/volumes" Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.124113 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" event={"ID":"cc258ae0-3005-4720-bcde-7a7be93c5dd0","Type":"ContainerStarted","Data":"60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74"} Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.125250 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.131011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99a58e01-ce00-4c33-8d7c-046711b4ef9a","Type":"ContainerStarted","Data":"c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577"} Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.131061 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99a58e01-ce00-4c33-8d7c-046711b4ef9a","Type":"ContainerStarted","Data":"b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd"} Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.131085 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99a58e01-ce00-4c33-8d7c-046711b4ef9a","Type":"ContainerStarted","Data":"8683b8f623cc8d845843cc1099c829e04659abc7aadfa4c0807ddfabcda4337f"} Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.132779 4724 generic.go:334] "Generic (PLEG): container finished" podID="d532a325-83f4-45d6-8363-8fab02ca4afc" containerID="563a5f8a59eb586e1fef7cd004568c34552bfbb258e006ce774a199146989847" exitCode=0 Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.132813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pqttj" event={"ID":"d532a325-83f4-45d6-8363-8fab02ca4afc","Type":"ContainerDied","Data":"563a5f8a59eb586e1fef7cd004568c34552bfbb258e006ce774a199146989847"} Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.151689 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" podStartSLOduration=5.151665432 podStartE2EDuration="5.151665432s" podCreationTimestamp="2026-02-26 11:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:40.146681805 +0000 UTC m=+1746.802420930" watchObservedRunningTime="2026-02-26 11:34:40.151665432 +0000 UTC m=+1746.807404547" Feb 26 11:34:40 crc kubenswrapper[4724]: I0226 11:34:40.179841 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.179821808 podStartE2EDuration="2.179821808s" podCreationTimestamp="2026-02-26 11:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:40.179771927 +0000 UTC m=+1746.835511062" watchObservedRunningTime="2026-02-26 11:34:40.179821808 +0000 UTC m=+1746.835560933" Feb 26 11:34:40 crc kubenswrapper[4724]: E0226 11:34:40.331867 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:34:40 crc kubenswrapper[4724]: E0226 11:34:40.334545 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:34:40 crc kubenswrapper[4724]: E0226 11:34:40.337741 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:34:40 crc kubenswrapper[4724]: E0226 11:34:40.337810 4724 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" containerName="nova-scheduler-scheduler" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.150834 4724 generic.go:334] "Generic (PLEG): container finished" podID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" containerID="c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000" exitCode=0 Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.150985 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c641824-adb3-47ca-88e7-8ae6b13b28ea","Type":"ContainerDied","Data":"c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000"} Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.341777 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.515577 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-combined-ca-bundle\") pod \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.515718 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j42pq\" (UniqueName: \"kubernetes.io/projected/6c641824-adb3-47ca-88e7-8ae6b13b28ea-kube-api-access-j42pq\") pod \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.515924 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-config-data\") pod \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\" (UID: \"6c641824-adb3-47ca-88e7-8ae6b13b28ea\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.534424 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c641824-adb3-47ca-88e7-8ae6b13b28ea-kube-api-access-j42pq" (OuterVolumeSpecName: "kube-api-access-j42pq") pod "6c641824-adb3-47ca-88e7-8ae6b13b28ea" (UID: "6c641824-adb3-47ca-88e7-8ae6b13b28ea"). InnerVolumeSpecName "kube-api-access-j42pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.545655 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-config-data" (OuterVolumeSpecName: "config-data") pod "6c641824-adb3-47ca-88e7-8ae6b13b28ea" (UID: "6c641824-adb3-47ca-88e7-8ae6b13b28ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.564479 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c641824-adb3-47ca-88e7-8ae6b13b28ea" (UID: "6c641824-adb3-47ca-88e7-8ae6b13b28ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.618562 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.618603 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j42pq\" (UniqueName: \"kubernetes.io/projected/6c641824-adb3-47ca-88e7-8ae6b13b28ea-kube-api-access-j42pq\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.618619 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c641824-adb3-47ca-88e7-8ae6b13b28ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.626438 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.626772 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-central-agent" containerID="cri-o://34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017" gracePeriod=30 Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.627252 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="sg-core" containerID="cri-o://902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3" gracePeriod=30 Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.627303 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="proxy-httpd" containerID="cri-o://9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2" gracePeriod=30 Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.627327 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-notification-agent" containerID="cri-o://53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57" gracePeriod=30 Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.636663 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.210:3000/\": read tcp 10.217.0.2:36696->10.217.0.210:3000: read: connection reset by peer" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.677245 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.822266 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r24bf\" (UniqueName: \"kubernetes.io/projected/d532a325-83f4-45d6-8363-8fab02ca4afc-kube-api-access-r24bf\") pod \"d532a325-83f4-45d6-8363-8fab02ca4afc\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.822496 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-config-data\") pod \"d532a325-83f4-45d6-8363-8fab02ca4afc\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.822591 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-scripts\") pod \"d532a325-83f4-45d6-8363-8fab02ca4afc\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.822678 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-combined-ca-bundle\") pod \"d532a325-83f4-45d6-8363-8fab02ca4afc\" (UID: \"d532a325-83f4-45d6-8363-8fab02ca4afc\") " Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.834886 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d532a325-83f4-45d6-8363-8fab02ca4afc-kube-api-access-r24bf" (OuterVolumeSpecName: "kube-api-access-r24bf") pod "d532a325-83f4-45d6-8363-8fab02ca4afc" (UID: "d532a325-83f4-45d6-8363-8fab02ca4afc"). InnerVolumeSpecName "kube-api-access-r24bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.847362 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-scripts" (OuterVolumeSpecName: "scripts") pod "d532a325-83f4-45d6-8363-8fab02ca4afc" (UID: "d532a325-83f4-45d6-8363-8fab02ca4afc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.895873 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d532a325-83f4-45d6-8363-8fab02ca4afc" (UID: "d532a325-83f4-45d6-8363-8fab02ca4afc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.927080 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.927113 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r24bf\" (UniqueName: \"kubernetes.io/projected/d532a325-83f4-45d6-8363-8fab02ca4afc-kube-api-access-r24bf\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.927122 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:41 crc kubenswrapper[4724]: I0226 11:34:41.928442 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-config-data" (OuterVolumeSpecName: "config-data") pod "d532a325-83f4-45d6-8363-8fab02ca4afc" (UID: "d532a325-83f4-45d6-8363-8fab02ca4afc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.030940 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532a325-83f4-45d6-8363-8fab02ca4afc-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.187413 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pqttj" event={"ID":"d532a325-83f4-45d6-8363-8fab02ca4afc","Type":"ContainerDied","Data":"552a035df672019e957d8e0094ef6df214b8d47b359b555c0f4e9b7cef9e2083"} Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.187452 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552a035df672019e957d8e0094ef6df214b8d47b359b555c0f4e9b7cef9e2083" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.187456 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pqttj" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.195099 4724 generic.go:334] "Generic (PLEG): container finished" podID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerID="9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2" exitCode=0 Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.195131 4724 generic.go:334] "Generic (PLEG): container finished" podID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerID="902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3" exitCode=2 Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.195194 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerDied","Data":"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2"} Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.195252 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerDied","Data":"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3"} Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.204311 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c641824-adb3-47ca-88e7-8ae6b13b28ea","Type":"ContainerDied","Data":"d758c2d25330c9e411c471effa330d987bb7d0955918e22c3c68e82528e8aee7"} Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.204385 4724 scope.go:117] "RemoveContainer" containerID="c15bec7f37de83d565911ceb354680d5d49fae2859ee295290b413bf406cd000" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.204710 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.246308 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.294576 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.317519 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:42 crc kubenswrapper[4724]: E0226 11:34:42.318070 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d532a325-83f4-45d6-8363-8fab02ca4afc" containerName="nova-cell1-conductor-db-sync" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.318095 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d532a325-83f4-45d6-8363-8fab02ca4afc" containerName="nova-cell1-conductor-db-sync" Feb 26 11:34:42 crc kubenswrapper[4724]: E0226 11:34:42.318145 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" containerName="nova-scheduler-scheduler" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.318157 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" containerName="nova-scheduler-scheduler" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.318400 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" containerName="nova-scheduler-scheduler" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.318424 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d532a325-83f4-45d6-8363-8fab02ca4afc" containerName="nova-cell1-conductor-db-sync" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.319275 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.322559 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.334719 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.352865 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.376958 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.377112 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.382821 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.442423 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-config-data\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.442531 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.442592 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7d9\" (UniqueName: \"kubernetes.io/projected/919b78bb-6cec-4e04-a51b-464b175630e5-kube-api-access-7n7d9\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.545722 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgtt\" (UniqueName: \"kubernetes.io/projected/8b01b6fe-7860-4ea8-9a62-4113061e1d42-kube-api-access-qcgtt\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.545779 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-config-data\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.545845 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.545873 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7d9\" (UniqueName: \"kubernetes.io/projected/919b78bb-6cec-4e04-a51b-464b175630e5-kube-api-access-7n7d9\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.545895 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b01b6fe-7860-4ea8-9a62-4113061e1d42-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.545957 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8b01b6fe-7860-4ea8-9a62-4113061e1d42-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.550073 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-config-data\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.555931 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.582871 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7d9\" (UniqueName: \"kubernetes.io/projected/919b78bb-6cec-4e04-a51b-464b175630e5-kube-api-access-7n7d9\") pod \"nova-scheduler-0\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " pod="openstack/nova-scheduler-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.647949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b01b6fe-7860-4ea8-9a62-4113061e1d42-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.648131 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcgtt\" (UniqueName: \"kubernetes.io/projected/8b01b6fe-7860-4ea8-9a62-4113061e1d42-kube-api-access-qcgtt\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.648224 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b01b6fe-7860-4ea8-9a62-4113061e1d42-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.653472 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b01b6fe-7860-4ea8-9a62-4113061e1d42-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.654939 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b01b6fe-7860-4ea8-9a62-4113061e1d42-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.677941 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcgtt\" (UniqueName: \"kubernetes.io/projected/8b01b6fe-7860-4ea8-9a62-4113061e1d42-kube-api-access-qcgtt\") pod \"nova-cell1-conductor-0\" (UID: \"8b01b6fe-7860-4ea8-9a62-4113061e1d42\") " pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:42 crc 
Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.744166 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.745156 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 26 11:34:42 crc kubenswrapper[4724]: I0226 11:34:42.958950 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056203 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-config-data\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056256 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44j7j\" (UniqueName: \"kubernetes.io/projected/0818e705-e62a-4d4f-9fa3-47e66a0f8946-kube-api-access-44j7j\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056341 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-sg-core-conf-yaml\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056452 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-combined-ca-bundle\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056505 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-log-httpd\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056564 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-scripts\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.056613 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-run-httpd\") pod \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\" (UID: \"0818e705-e62a-4d4f-9fa3-47e66a0f8946\") "
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.057313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0818e705-e62a-4d4f-9fa3-47e66a0f8946" (UID: "0818e705-e62a-4d4f-9fa3-47e66a0f8946"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.057554 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0818e705-e62a-4d4f-9fa3-47e66a0f8946" (UID: "0818e705-e62a-4d4f-9fa3-47e66a0f8946"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.061501 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ddfb9fd96-hzc8c" podUID="fa39614a-db84-4214-baa1-bd7cbc7b5ae0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.154:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.061797 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-scripts" (OuterVolumeSpecName: "scripts") pod "0818e705-e62a-4d4f-9fa3-47e66a0f8946" (UID: "0818e705-e62a-4d4f-9fa3-47e66a0f8946"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.079511 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0818e705-e62a-4d4f-9fa3-47e66a0f8946-kube-api-access-44j7j" (OuterVolumeSpecName: "kube-api-access-44j7j") pod "0818e705-e62a-4d4f-9fa3-47e66a0f8946" (UID: "0818e705-e62a-4d4f-9fa3-47e66a0f8946"). InnerVolumeSpecName "kube-api-access-44j7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.123284 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0818e705-e62a-4d4f-9fa3-47e66a0f8946" (UID: "0818e705-e62a-4d4f-9fa3-47e66a0f8946"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.159639 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.159821 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.159835 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.159847 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0818e705-e62a-4d4f-9fa3-47e66a0f8946-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.159859 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44j7j\" (UniqueName: \"kubernetes.io/projected/0818e705-e62a-4d4f-9fa3-47e66a0f8946-kube-api-access-44j7j\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.238148 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0818e705-e62a-4d4f-9fa3-47e66a0f8946" (UID: "0818e705-e62a-4d4f-9fa3-47e66a0f8946"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.251878 4724 generic.go:334] "Generic (PLEG): container finished" podID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerID="53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57" exitCode=0
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.251929 4724 generic.go:334] "Generic (PLEG): container finished" podID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerID="34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017" exitCode=0
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.251981 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerDied","Data":"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57"}
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.252013 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerDied","Data":"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017"}
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.252024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0818e705-e62a-4d4f-9fa3-47e66a0f8946","Type":"ContainerDied","Data":"8d438d2a002cc75deb27d39c0eea0f7497b346dabf162e42ecce5f27938a50f8"}
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.252041 4724 scope.go:117] "RemoveContainer" containerID="9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2"
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.252423 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.261281 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.265541 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9e70ee5-2312-436e-83ab-c365c8447761" containerID="4deb859c06fe259b5e1eca922df71457ca6d347f7bd781c915f6e1f60ee0b235" exitCode=0
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.265600 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9e70ee5-2312-436e-83ab-c365c8447761","Type":"ContainerDied","Data":"4deb859c06fe259b5e1eca922df71457ca6d347f7bd781c915f6e1f60ee0b235"}
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.340801 4724 scope.go:117] "RemoveContainer" containerID="902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3"
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.380627 4724 scope.go:117] "RemoveContainer" containerID="53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57"
Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.417348 4724 scope.go:117] "RemoveContainer" containerID="34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017"
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.449549 4724 scope.go:117] "RemoveContainer" containerID="9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.450036 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2\": container with ID starting with 9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2 not found: ID does not exist" containerID="9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.450074 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2"} err="failed to get container status \"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2\": rpc error: code = NotFound desc = could not find container \"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2\": container with ID starting with 9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.450100 4724 scope.go:117] "RemoveContainer" containerID="902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.450904 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3\": container with ID starting with 902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3 not found: ID does not exist" containerID="902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.450966 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3"} err="failed to get container status \"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3\": rpc error: code = NotFound desc = could not find container \"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3\": container with ID starting with 902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.451001 4724 scope.go:117] "RemoveContainer" containerID="53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.451794 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57\": container with ID starting with 53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57 not found: ID does not exist" containerID="53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.451826 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57"} err="failed to get container status \"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57\": rpc error: code = NotFound desc = could not 
find container \"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57\": container with ID starting with 53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.451841 4724 scope.go:117] "RemoveContainer" containerID="34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.452541 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017\": container with ID starting with 34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017 not found: ID does not exist" containerID="34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.452570 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017"} err="failed to get container status \"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017\": rpc error: code = NotFound desc = could not find container \"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017\": container with ID starting with 34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.452593 4724 scope.go:117] "RemoveContainer" containerID="9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.453272 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2"} err="failed to get container status \"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2\": rpc error: code = NotFound desc = could not find container \"9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2\": container with ID starting with 9fe504db03fd2b3b8c11f68f7a2998e62a169847530eb0869d4ef75122dcbab2 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.453294 4724 scope.go:117] "RemoveContainer" containerID="902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.453561 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3"} err="failed to get container status \"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3\": rpc error: code = NotFound desc = could not find container \"902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3\": container with ID starting with 902ffbf3b115cc7c27beb7fe8af8c6615cecca1f800211fa6ddb90b6010a0fe3 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.453585 4724 scope.go:117] "RemoveContainer" containerID="53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.453778 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57"} err="failed to get container status \"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57\": rpc error: code = NotFound desc = could not 
find container \"53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57\": container with ID starting with 53380a8df95fe7a1f27cef2b2ac446054aed4f29571a9f7c8ed2f7dfc1a3cb57 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.453803 4724 scope.go:117] "RemoveContainer" containerID="34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.454050 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017"} err="failed to get container status \"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017\": rpc error: code = NotFound desc = could not find container \"34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017\": container with ID starting with 34477595c3626af57599663643e9182227b4e830a4b8d2ae216924e831113017 not found: ID does not exist" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.465437 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0818e705-e62a-4d4f-9fa3-47e66a0f8946-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.470837 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.566748 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-combined-ca-bundle\") pod \"b9e70ee5-2312-436e-83ab-c365c8447761\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.566900 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9e70ee5-2312-436e-83ab-c365c8447761-logs\") pod \"b9e70ee5-2312-436e-83ab-c365c8447761\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.567058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-config-data\") pod \"b9e70ee5-2312-436e-83ab-c365c8447761\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.567120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpqfp\" (UniqueName: \"kubernetes.io/projected/b9e70ee5-2312-436e-83ab-c365c8447761-kube-api-access-rpqfp\") pod \"b9e70ee5-2312-436e-83ab-c365c8447761\" (UID: \"b9e70ee5-2312-436e-83ab-c365c8447761\") " Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.568341 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9e70ee5-2312-436e-83ab-c365c8447761-logs" (OuterVolumeSpecName: "logs") pod "b9e70ee5-2312-436e-83ab-c365c8447761" (UID: "b9e70ee5-2312-436e-83ab-c365c8447761"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.576727 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e70ee5-2312-436e-83ab-c365c8447761-kube-api-access-rpqfp" (OuterVolumeSpecName: "kube-api-access-rpqfp") pod "b9e70ee5-2312-436e-83ab-c365c8447761" (UID: "b9e70ee5-2312-436e-83ab-c365c8447761"). InnerVolumeSpecName "kube-api-access-rpqfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.599817 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:34:43 crc kubenswrapper[4724]: W0226 11:34:43.622599 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod919b78bb_6cec_4e04_a51b_464b175630e5.slice/crio-c0d5b26d94c9244fa0a8283e35db760ea257410a1ee1ca236288a3a93ec5040e WatchSource:0}: Error finding container c0d5b26d94c9244fa0a8283e35db760ea257410a1ee1ca236288a3a93ec5040e: Status 404 returned error can't find the container with id c0d5b26d94c9244fa0a8283e35db760ea257410a1ee1ca236288a3a93ec5040e Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.641149 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:43 crc kubenswrapper[4724]: W0226 11:34:43.670877 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b01b6fe_7860_4ea8_9a62_4113061e1d42.slice/crio-b40accda2441ef359c2d3f2a3ad545de42891a61449c75ed3d82aad77b5bad63 WatchSource:0}: Error finding container b40accda2441ef359c2d3f2a3ad545de42891a61449c75ed3d82aad77b5bad63: Status 404 returned error can't find the container with id b40accda2441ef359c2d3f2a3ad545de42891a61449c75ed3d82aad77b5bad63 Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.672116 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpqfp\" (UniqueName: \"kubernetes.io/projected/b9e70ee5-2312-436e-83ab-c365c8447761-kube-api-access-rpqfp\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.672153 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9e70ee5-2312-436e-83ab-c365c8447761-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.680370 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9e70ee5-2312-436e-83ab-c365c8447761" (UID: "b9e70ee5-2312-436e-83ab-c365c8447761"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.680582 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.684849 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-config-data" (OuterVolumeSpecName: "config-data") pod "b9e70ee5-2312-436e-83ab-c365c8447761" (UID: "b9e70ee5-2312-436e-83ab-c365c8447761"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.698469 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.711862 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.712352 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-api" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712369 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-api" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.712398 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="sg-core" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712406 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="sg-core" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.712420 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-central-agent" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712429 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-central-agent" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.712447 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-notification-agent" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712455 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-notification-agent" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.712464 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-log" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712471 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-log" Feb 26 11:34:43 crc kubenswrapper[4724]: E0226 11:34:43.712491 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="proxy-httpd" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712498 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="proxy-httpd" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712721 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="sg-core" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712735 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-notification-agent" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712751 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" containerName="ceilometer-central-agent" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712758 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" 
containerName="proxy-httpd" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712775 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-api" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.712791 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" containerName="nova-api-log" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.716999 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.719742 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.724579 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.727320 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.781473 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.781826 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e70ee5-2312-436e-83ab-c365c8447761-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.809979 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.811466 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883557 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883608 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883648 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdkk\" (UniqueName: \"kubernetes.io/projected/b81ddf59-7703-441c-aba4-94c804a7d830-kube-api-access-4qdkk\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883687 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-config-data\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883729 4724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-run-httpd\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883763 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-log-httpd\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.883995 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-scripts\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985589 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-config-data\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985658 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-run-httpd\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985681 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-log-httpd\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985731 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-scripts\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985859 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985894 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.985939 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdkk\" (UniqueName: \"kubernetes.io/projected/b81ddf59-7703-441c-aba4-94c804a7d830-kube-api-access-4qdkk\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.987818 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-run-httpd\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.988069 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-log-httpd\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.990384 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0818e705-e62a-4d4f-9fa3-47e66a0f8946" path="/var/lib/kubelet/pods/0818e705-e62a-4d4f-9fa3-47e66a0f8946/volumes" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.992124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-config-data\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.993517 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c641824-adb3-47ca-88e7-8ae6b13b28ea" path="/var/lib/kubelet/pods/6c641824-adb3-47ca-88e7-8ae6b13b28ea/volumes" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.994827 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-scripts\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:43 crc kubenswrapper[4724]: I0226 11:34:43.995325 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.003353 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.012391 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdkk\" (UniqueName: \"kubernetes.io/projected/b81ddf59-7703-441c-aba4-94c804a7d830-kube-api-access-4qdkk\") pod \"ceilometer-0\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " pod="openstack/ceilometer-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.206723 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.292059 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"919b78bb-6cec-4e04-a51b-464b175630e5","Type":"ContainerStarted","Data":"780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb"} Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.293360 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"919b78bb-6cec-4e04-a51b-464b175630e5","Type":"ContainerStarted","Data":"c0d5b26d94c9244fa0a8283e35db760ea257410a1ee1ca236288a3a93ec5040e"} Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.303892 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9e70ee5-2312-436e-83ab-c365c8447761","Type":"ContainerDied","Data":"3266fc228952a277c7a62fc535446149bad372ff9ff2129522856bff35a2f0a8"} Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.303967 4724 scope.go:117] "RemoveContainer" containerID="4deb859c06fe259b5e1eca922df71457ca6d347f7bd781c915f6e1f60ee0b235" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.303962 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.313144 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8b01b6fe-7860-4ea8-9a62-4113061e1d42","Type":"ContainerStarted","Data":"134f6c78ad4cd586512fcb624acb76a4e8aa0b4e5b77040e84d8c6b1c9f12476"} Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.313203 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8b01b6fe-7860-4ea8-9a62-4113061e1d42","Type":"ContainerStarted","Data":"b40accda2441ef359c2d3f2a3ad545de42891a61449c75ed3d82aad77b5bad63"} Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.313217 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.324537 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.324510556 podStartE2EDuration="2.324510556s" podCreationTimestamp="2026-02-26 11:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:44.314100771 +0000 UTC m=+1750.969839886" watchObservedRunningTime="2026-02-26 11:34:44.324510556 +0000 UTC m=+1750.980249671" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.347420 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.357888 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.396670 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.398573 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.400495 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.400468977 podStartE2EDuration="2.400468977s" podCreationTimestamp="2026-02-26 11:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:44.379467473 +0000 UTC m=+1751.035206588" watchObservedRunningTime="2026-02-26 11:34:44.400468977 +0000 UTC m=+1751.056208092" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.403167 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.403413 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.403704 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.408552 4724 scope.go:117] "RemoveContainer" containerID="bcd6074a9e059708b00e9cc3441dbf5fc0a3c9aa0a20191dc981b1443fd94592" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.484155 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.507811 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-public-tls-certs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.508218 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.508325 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-config-data\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.515614 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-logs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.515671 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.516065 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55slw\" (UniqueName: 
\"kubernetes.io/projected/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-kube-api-access-55slw\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.649023 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-public-tls-certs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.649098 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.649139 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-config-data\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.649229 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-logs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.649253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.649323 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55slw\" (UniqueName: \"kubernetes.io/projected/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-kube-api-access-55slw\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.650521 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-logs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.655830 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-config-data\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.656845 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-public-tls-certs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.658281 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.662989 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.679224 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55slw\" (UniqueName: \"kubernetes.io/projected/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-kube-api-access-55slw\") pod \"nova-api-0\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.736156 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:34:44 crc kubenswrapper[4724]: I0226 11:34:44.824532 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:44 crc kubenswrapper[4724]: W0226 11:34:44.838135 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81ddf59_7703_441c_aba4_94c804a7d830.slice/crio-fbc0812476848486372908d33108f1c7dda9b4cadd6d5b08c6b19baab6925eb3 WatchSource:0}: Error finding container fbc0812476848486372908d33108f1c7dda9b4cadd6d5b08c6b19baab6925eb3: Status 404 returned error can't find the container with id fbc0812476848486372908d33108f1c7dda9b4cadd6d5b08c6b19baab6925eb3 Feb 26 11:34:45 crc kubenswrapper[4724]: I0226 11:34:45.324144 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerStarted","Data":"fbc0812476848486372908d33108f1c7dda9b4cadd6d5b08c6b19baab6925eb3"} Feb 26 11:34:45 crc kubenswrapper[4724]: I0226 11:34:45.355152 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:34:45 crc kubenswrapper[4724]: I0226 11:34:45.576791 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:34:45 crc kubenswrapper[4724]: I0226 11:34:45.998006 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e70ee5-2312-436e-83ab-c365c8447761" path="/var/lib/kubelet/pods/b9e70ee5-2312-436e-83ab-c365c8447761/volumes" Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.346406 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd","Type":"ContainerStarted","Data":"c2c717271ff34e2946ad07dfed2ac8c83ff137d0d3ec31b3879e552718ba8ae6"} Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.347673 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd","Type":"ContainerStarted","Data":"01997d7ccb528fed9f47c89985a3717556fb81a96c687eb8d1586911b9014f07"} Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.347758 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd","Type":"ContainerStarted","Data":"9c734ca600ab8c0fa458196c9bdc6dee21571d5e98eb72729a27b35826c96168"} Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.354425 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerStarted","Data":"e72f82a24a13730177725de51c5823e95e9f84d785aab51378eb96f258d3998b"} Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.377003 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.376980593 podStartE2EDuration="2.376980593s" podCreationTimestamp="2026-02-26 11:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:46.369928744 +0000 UTC m=+1753.025667859" watchObservedRunningTime="2026-02-26 11:34:46.376980593 +0000 UTC m=+1753.032719708" Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.777311 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.872125 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nzxmp"] Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.880600 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" containerName="dnsmasq-dns" containerID="cri-o://c101124757a7f5d9c7f1a946596eeb61327948f2e381e400eeaeab6bb26c0e81" gracePeriod=10 Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.905677 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:34:46 crc kubenswrapper[4724]: I0226 11:34:46.905731 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.375918 4724 generic.go:334] "Generic (PLEG): container finished" podID="2746e33a-3533-4464-abfb-2ead8cf17856" containerID="c101124757a7f5d9c7f1a946596eeb61327948f2e381e400eeaeab6bb26c0e81" exitCode=0 Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.376207 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" event={"ID":"2746e33a-3533-4464-abfb-2ead8cf17856","Type":"ContainerDied","Data":"c101124757a7f5d9c7f1a946596eeb61327948f2e381e400eeaeab6bb26c0e81"} Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.380958 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerStarted","Data":"a40ca75d11c66d3b04b76e350dae173922bcea18dd91b34d739756a1e44705b8"} Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.745433 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.773500 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.934587 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-sb\") pod \"2746e33a-3533-4464-abfb-2ead8cf17856\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.934968 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-config\") pod \"2746e33a-3533-4464-abfb-2ead8cf17856\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.934997 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-svc\") pod \"2746e33a-3533-4464-abfb-2ead8cf17856\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.935029 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-nb\") pod \"2746e33a-3533-4464-abfb-2ead8cf17856\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.935073 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-swift-storage-0\") pod \"2746e33a-3533-4464-abfb-2ead8cf17856\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.935251 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnlnq\" (UniqueName: \"kubernetes.io/projected/2746e33a-3533-4464-abfb-2ead8cf17856-kube-api-access-qnlnq\") pod \"2746e33a-3533-4464-abfb-2ead8cf17856\" (UID: \"2746e33a-3533-4464-abfb-2ead8cf17856\") " Feb 26 11:34:47 crc kubenswrapper[4724]: I0226 11:34:47.945216 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2746e33a-3533-4464-abfb-2ead8cf17856-kube-api-access-qnlnq" (OuterVolumeSpecName: "kube-api-access-qnlnq") pod "2746e33a-3533-4464-abfb-2ead8cf17856" (UID: "2746e33a-3533-4464-abfb-2ead8cf17856"). InnerVolumeSpecName "kube-api-access-qnlnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.046709 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnlnq\" (UniqueName: \"kubernetes.io/projected/2746e33a-3533-4464-abfb-2ead8cf17856-kube-api-access-qnlnq\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.166905 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2746e33a-3533-4464-abfb-2ead8cf17856" (UID: "2746e33a-3533-4464-abfb-2ead8cf17856"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.180857 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2746e33a-3533-4464-abfb-2ead8cf17856" (UID: "2746e33a-3533-4464-abfb-2ead8cf17856"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.222356 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2746e33a-3533-4464-abfb-2ead8cf17856" (UID: "2746e33a-3533-4464-abfb-2ead8cf17856"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.239751 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-config" (OuterVolumeSpecName: "config") pod "2746e33a-3533-4464-abfb-2ead8cf17856" (UID: "2746e33a-3533-4464-abfb-2ead8cf17856"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.263574 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.263607 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.263616 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.263626 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.306737 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2746e33a-3533-4464-abfb-2ead8cf17856" (UID: "2746e33a-3533-4464-abfb-2ead8cf17856"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.365689 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2746e33a-3533-4464-abfb-2ead8cf17856-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.406237 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerStarted","Data":"949062c0737ed30dfb9166cbb8ba93450fe8de80d91b67537ea674722d9ec39e"} Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.408862 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" event={"ID":"2746e33a-3533-4464-abfb-2ead8cf17856","Type":"ContainerDied","Data":"1fe70fce880e2944d2cb675d9b2a490a72cdd8c05260e57729da78b62470e83d"} Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.408916 4724 scope.go:117] "RemoveContainer" containerID="c101124757a7f5d9c7f1a946596eeb61327948f2e381e400eeaeab6bb26c0e81" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.409042 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-nzxmp" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.443484 4724 scope.go:117] "RemoveContainer" containerID="d6dfc41e3a156c983d48353577cdc743bba7e8b4b20a78185d148d3298779533" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.462404 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nzxmp"] Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.476427 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-nzxmp"] Feb 26 11:34:48 crc kubenswrapper[4724]: E0226 11:34:48.629868 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2746e33a_3533_4464_abfb_2ead8cf17856.slice\": RecentStats: unable to find data in memory cache]" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.812899 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:34:48 crc kubenswrapper[4724]: I0226 11:34:48.813232 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:34:49 crc kubenswrapper[4724]: I0226 11:34:49.820332 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:49 crc kubenswrapper[4724]: I0226 11:34:49.820360 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:49 crc kubenswrapper[4724]: I0226 11:34:49.990567 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" path="/var/lib/kubelet/pods/2746e33a-3533-4464-abfb-2ead8cf17856/volumes" Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 
Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 11:34:51.458461 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 11:34:51.458254 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-notification-agent" containerID="cri-o://a40ca75d11c66d3b04b76e350dae173922bcea18dd91b34d739756a1e44705b8" gracePeriod=30
Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 11:34:51.458255 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="sg-core" containerID="cri-o://949062c0737ed30dfb9166cbb8ba93450fe8de80d91b67537ea674722d9ec39e" gracePeriod=30
Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 11:34:51.458322 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="proxy-httpd" containerID="cri-o://81f4a2920fce99b629f57b812de2f688ccb7786e0a8421c0f097f17b7ca5b694" gracePeriod=30
Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 11:34:51.458192 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-central-agent" containerID="cri-o://e72f82a24a13730177725de51c5823e95e9f84d785aab51378eb96f258d3998b" gracePeriod=30
Feb 26 11:34:51 crc kubenswrapper[4724]: I0226 11:34:51.483657 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.447589968 podStartE2EDuration="8.483640266s" podCreationTimestamp="2026-02-26 11:34:43 +0000 UTC" firstStartedPulling="2026-02-26 11:34:44.843441153 +0000 UTC m=+1751.499180268" lastFinishedPulling="2026-02-26 11:34:50.879491451 +0000 UTC m=+1757.535230566" observedRunningTime="2026-02-26 11:34:51.482862616 +0000 UTC m=+1758.138601751" watchObservedRunningTime="2026-02-26 11:34:51.483640266 +0000 UTC m=+1758.139379381"
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.470618 4724 generic.go:334] "Generic (PLEG): container finished" podID="b81ddf59-7703-441c-aba4-94c804a7d830" containerID="949062c0737ed30dfb9166cbb8ba93450fe8de80d91b67537ea674722d9ec39e" exitCode=2
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.471950 4724 generic.go:334] "Generic (PLEG): container finished" podID="b81ddf59-7703-441c-aba4-94c804a7d830" containerID="a40ca75d11c66d3b04b76e350dae173922bcea18dd91b34d739756a1e44705b8" exitCode=0
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.470714 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerDied","Data":"949062c0737ed30dfb9166cbb8ba93450fe8de80d91b67537ea674722d9ec39e"}
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.472248 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerDied","Data":"a40ca75d11c66d3b04b76e350dae173922bcea18dd91b34d739756a1e44705b8"}
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.745498 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.776484 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 26 11:34:52 crc kubenswrapper[4724]: I0226 11:34:52.777895 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 26 11:34:53 crc kubenswrapper[4724]: I0226 11:34:53.512365 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 26 11:34:54 crc kubenswrapper[4724]: I0226 11:34:54.738041 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 26 11:34:54 crc kubenswrapper[4724]: I0226 11:34:54.738111 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.254887 4724 scope.go:117] "RemoveContainer" containerID="6a4e8c5deeff5e7e2d8b1dcbde0bdd01b3fae4fe6b90c4b8b31772fee0d41700"
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.459428 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.510277 4724 generic.go:334] "Generic (PLEG): container finished" podID="b81ddf59-7703-441c-aba4-94c804a7d830" containerID="e72f82a24a13730177725de51c5823e95e9f84d785aab51378eb96f258d3998b" exitCode=0
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.510368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerDied","Data":"e72f82a24a13730177725de51c5823e95e9f84d785aab51378eb96f258d3998b"}
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.513261 4724 generic.go:334] "Generic (PLEG): container finished" podID="b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" containerID="4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83" exitCode=137
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.513342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2","Type":"ContainerDied","Data":"4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83"}
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.513375 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2","Type":"ContainerDied","Data":"832596a61e3b496717ba979fd5b5274d6bab4373c62739c530718dce064943d3"}
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.513395 4724 scope.go:117] "RemoveContainer" containerID="4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83"
Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.513605 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.525843 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzgm\" (UniqueName: \"kubernetes.io/projected/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-kube-api-access-nmzgm\") pod \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.526111 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-config-data\") pod \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.526171 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-combined-ca-bundle\") pod \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\" (UID: \"b96a3fdf-1e3b-47fb-a073-26bc8acb78d2\") " Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.535401 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-kube-api-access-nmzgm" (OuterVolumeSpecName: "kube-api-access-nmzgm") pod "b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" (UID: "b96a3fdf-1e3b-47fb-a073-26bc8acb78d2"). InnerVolumeSpecName "kube-api-access-nmzgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.558916 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-config-data" (OuterVolumeSpecName: "config-data") pod "b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" (UID: "b96a3fdf-1e3b-47fb-a073-26bc8acb78d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.564655 4724 scope.go:117] "RemoveContainer" containerID="4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83" Feb 26 11:34:55 crc kubenswrapper[4724]: E0226 11:34:55.566157 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83\": container with ID starting with 4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83 not found: ID does not exist" containerID="4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.566382 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83"} err="failed to get container status \"4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83\": rpc error: code = NotFound desc = could not find container \"4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83\": container with ID starting with 4278a4afaa0f65060c9be0d90e1980e0650ebf5279a0b0f4149e7ff3784f2f83 not found: ID does not exist" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.580411 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" (UID: "b96a3fdf-1e3b-47fb-a073-26bc8acb78d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.627668 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzgm\" (UniqueName: \"kubernetes.io/projected/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-kube-api-access-nmzgm\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.627723 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.627737 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.750356 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.750651 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.867234 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.890740 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.901542 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:55 crc kubenswrapper[4724]: E0226 11:34:55.902095 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" containerName="init" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.902122 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" containerName="init" Feb 26 11:34:55 crc kubenswrapper[4724]: E0226 11:34:55.902144 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.902153 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 11:34:55 crc kubenswrapper[4724]: E0226 11:34:55.902165 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" containerName="dnsmasq-dns" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.902173 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" containerName="dnsmasq-dns" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.902425 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2746e33a-3533-4464-abfb-2ead8cf17856" containerName="dnsmasq-dns" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.902476 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.903321 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.906787 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.907152 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.907755 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.932419 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.932491 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.932571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.932617 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.932641 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdrvq\" (UniqueName: \"kubernetes.io/projected/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-kube-api-access-rdrvq\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.934468 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:55 crc kubenswrapper[4724]: I0226 11:34:55.990143 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96a3fdf-1e3b-47fb-a073-26bc8acb78d2" path="/var/lib/kubelet/pods/b96a3fdf-1e3b-47fb-a073-26bc8acb78d2/volumes" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.034355 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.034473 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.034522 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.034543 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdrvq\" (UniqueName: \"kubernetes.io/projected/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-kube-api-access-rdrvq\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.034624 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.041347 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.041800 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.043059 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.044873 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.057732 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdrvq\" (UniqueName: \"kubernetes.io/projected/56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54-kube-api-access-rdrvq\") pod \"nova-cell1-novncproxy-0\" (UID: \"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.221017 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:34:56 crc kubenswrapper[4724]: I0226 11:34:56.831587 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 11:34:57 crc kubenswrapper[4724]: I0226 11:34:57.543167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54","Type":"ContainerStarted","Data":"7ed8e3ea6c8ffc04c1a1c052fae7c1e5cc177712a30b09105adcb58dde5cd8be"} Feb 26 11:34:57 crc kubenswrapper[4724]: I0226 11:34:57.543614 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54","Type":"ContainerStarted","Data":"b41afc5c8f89c64af0694a0d74cb96deece0fd12f6bea34db7c970a16d59fdbf"} Feb 26 11:34:57 crc kubenswrapper[4724]: I0226 11:34:57.582752 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.582727217 podStartE2EDuration="2.582727217s" podCreationTimestamp="2026-02-26 11:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:34:57.564440162 +0000 UTC m=+1764.220179267" watchObservedRunningTime="2026-02-26 11:34:57.582727217 +0000 UTC m=+1764.238466342" Feb 26 11:34:58 crc kubenswrapper[4724]: I0226 11:34:58.815551 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:34:58 crc kubenswrapper[4724]: I0226 11:34:58.816572 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:34:58 crc kubenswrapper[4724]: I0226 11:34:58.820111 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:34:59 crc kubenswrapper[4724]: I0226 11:34:59.570929 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:35:01 crc kubenswrapper[4724]: I0226 11:35:01.222132 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:35:04 crc kubenswrapper[4724]: I0226 11:35:04.746229 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:35:04 crc kubenswrapper[4724]: I0226 11:35:04.746857 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:35:04 crc kubenswrapper[4724]: I0226 11:35:04.747249 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:35:04 crc kubenswrapper[4724]: I0226 11:35:04.747288 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:35:04 crc kubenswrapper[4724]: I0226 11:35:04.755229 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:35:04 crc kubenswrapper[4724]: I0226 11:35:04.755564 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.223035 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.260449 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.655126 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.846408 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2brpm"] Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.847970 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.849983 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.850321 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.860165 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2brpm"] Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.866500 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-config-data\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.866741 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-scripts\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.866772 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl4j8\" (UniqueName: \"kubernetes.io/projected/1754e31b-5617-4b43-96ec-fa7f2845b2de-kube-api-access-fl4j8\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.866797 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.968753 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-scripts\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.968799 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl4j8\" (UniqueName: \"kubernetes.io/projected/1754e31b-5617-4b43-96ec-fa7f2845b2de-kube-api-access-fl4j8\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.968820 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.968895 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-config-data\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.978486 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.983124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-scripts\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.988038 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl4j8\" (UniqueName: \"kubernetes.io/projected/1754e31b-5617-4b43-96ec-fa7f2845b2de-kube-api-access-fl4j8\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:06 crc kubenswrapper[4724]: I0226 11:35:06.988472 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-config-data\") pod \"nova-cell1-cell-mapping-2brpm\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:07 crc kubenswrapper[4724]: I0226 11:35:07.168817 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:07 crc kubenswrapper[4724]: I0226 11:35:07.655623 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2brpm"] Feb 26 11:35:08 crc kubenswrapper[4724]: I0226 11:35:08.654422 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2brpm" event={"ID":"1754e31b-5617-4b43-96ec-fa7f2845b2de","Type":"ContainerStarted","Data":"ec77abe513b5c472b56cee1421d6050aa9092dbd78704fe99fa22f0ac25b7bcf"} Feb 26 11:35:08 crc kubenswrapper[4724]: I0226 11:35:08.655128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2brpm" event={"ID":"1754e31b-5617-4b43-96ec-fa7f2845b2de","Type":"ContainerStarted","Data":"ccb5118c3769706fda80a6316fbeb496a77b1c5f39e31d9932e4a822eaaaaffb"} Feb 26 11:35:08 crc kubenswrapper[4724]: I0226 11:35:08.681097 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2brpm" podStartSLOduration=2.681079993 podStartE2EDuration="2.681079993s" podCreationTimestamp="2026-02-26 11:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:08.675578473 +0000 UTC m=+1775.331317608" watchObservedRunningTime="2026-02-26 11:35:08.681079993 +0000 UTC m=+1775.336819108" Feb 26 11:35:13 crc kubenswrapper[4724]: I0226 11:35:13.705401 4724 generic.go:334] "Generic (PLEG): container finished" podID="1754e31b-5617-4b43-96ec-fa7f2845b2de" containerID="ec77abe513b5c472b56cee1421d6050aa9092dbd78704fe99fa22f0ac25b7bcf" exitCode=0 Feb 26 11:35:13 crc kubenswrapper[4724]: I0226 11:35:13.706112 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2brpm" event={"ID":"1754e31b-5617-4b43-96ec-fa7f2845b2de","Type":"ContainerDied","Data":"ec77abe513b5c472b56cee1421d6050aa9092dbd78704fe99fa22f0ac25b7bcf"} Feb 26 11:35:14 crc kubenswrapper[4724]: I0226 11:35:14.212051 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.108497 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.160442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl4j8\" (UniqueName: \"kubernetes.io/projected/1754e31b-5617-4b43-96ec-fa7f2845b2de-kube-api-access-fl4j8\") pod \"1754e31b-5617-4b43-96ec-fa7f2845b2de\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.160619 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-scripts\") pod \"1754e31b-5617-4b43-96ec-fa7f2845b2de\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.160768 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-combined-ca-bundle\") pod \"1754e31b-5617-4b43-96ec-fa7f2845b2de\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.160821 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-config-data\") pod \"1754e31b-5617-4b43-96ec-fa7f2845b2de\" (UID: \"1754e31b-5617-4b43-96ec-fa7f2845b2de\") " Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.172414 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-scripts" (OuterVolumeSpecName: "scripts") pod "1754e31b-5617-4b43-96ec-fa7f2845b2de" (UID: "1754e31b-5617-4b43-96ec-fa7f2845b2de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.193375 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1754e31b-5617-4b43-96ec-fa7f2845b2de-kube-api-access-fl4j8" (OuterVolumeSpecName: "kube-api-access-fl4j8") pod "1754e31b-5617-4b43-96ec-fa7f2845b2de" (UID: "1754e31b-5617-4b43-96ec-fa7f2845b2de"). InnerVolumeSpecName "kube-api-access-fl4j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.203612 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1754e31b-5617-4b43-96ec-fa7f2845b2de" (UID: "1754e31b-5617-4b43-96ec-fa7f2845b2de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.213856 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-config-data" (OuterVolumeSpecName: "config-data") pod "1754e31b-5617-4b43-96ec-fa7f2845b2de" (UID: "1754e31b-5617-4b43-96ec-fa7f2845b2de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.263480 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.263769 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.263829 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl4j8\" (UniqueName: \"kubernetes.io/projected/1754e31b-5617-4b43-96ec-fa7f2845b2de-kube-api-access-fl4j8\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.266097 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1754e31b-5617-4b43-96ec-fa7f2845b2de-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.744040 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2brpm" event={"ID":"1754e31b-5617-4b43-96ec-fa7f2845b2de","Type":"ContainerDied","Data":"ccb5118c3769706fda80a6316fbeb496a77b1c5f39e31d9932e4a822eaaaaffb"} Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.744088 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb5118c3769706fda80a6316fbeb496a77b1c5f39e31d9932e4a822eaaaaffb" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.744201 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2brpm" Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.929371 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.929760 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-log" containerID="cri-o://01997d7ccb528fed9f47c89985a3717556fb81a96c687eb8d1586911b9014f07" gracePeriod=30 Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.930258 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-api" containerID="cri-o://c2c717271ff34e2946ad07dfed2ac8c83ff137d0d3ec31b3879e552718ba8ae6" gracePeriod=30 Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.949929 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:35:15 crc kubenswrapper[4724]: I0226 11:35:15.950169 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="919b78bb-6cec-4e04-a51b-464b175630e5" containerName="nova-scheduler-scheduler" containerID="cri-o://780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" gracePeriod=30 Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.000685 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.001019 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" 
containerName="nova-metadata-log" containerID="cri-o://b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd" gracePeriod=30 Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.001069 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-metadata" containerID="cri-o://c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577" gracePeriod=30 Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.755353 4724 generic.go:334] "Generic (PLEG): container finished" podID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerID="01997d7ccb528fed9f47c89985a3717556fb81a96c687eb8d1586911b9014f07" exitCode=143 Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.755459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd","Type":"ContainerDied","Data":"01997d7ccb528fed9f47c89985a3717556fb81a96c687eb8d1586911b9014f07"} Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.758000 4724 generic.go:334] "Generic (PLEG): container finished" podID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerID="b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd" exitCode=143 Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.758031 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99a58e01-ce00-4c33-8d7c-046711b4ef9a","Type":"ContainerDied","Data":"b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd"} Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.906882 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.906937 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.906979 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.907818 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:35:16 crc kubenswrapper[4724]: I0226 11:35:16.907879 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" gracePeriod=600 Feb 26 11:35:17 crc kubenswrapper[4724]: E0226 11:35:17.163245 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:35:17 crc kubenswrapper[4724]: E0226 11:35:17.775643 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:35:17 crc kubenswrapper[4724]: E0226 11:35:17.785863 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:35:17 crc kubenswrapper[4724]: E0226 11:35:17.790595 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 11:35:17 crc kubenswrapper[4724]: E0226 11:35:17.790688 4724 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="919b78bb-6cec-4e04-a51b-464b175630e5" containerName="nova-scheduler-scheduler" Feb 26 11:35:17 crc kubenswrapper[4724]: I0226 11:35:17.792552 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" exitCode=0 Feb 26 11:35:17 crc kubenswrapper[4724]: I0226 11:35:17.792593 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"} Feb 26 11:35:17 crc kubenswrapper[4724]: I0226 11:35:17.792624 4724 scope.go:117] "RemoveContainer" containerID="55d1fb33975b75b061c0528685eae11004b1a2f0eedaec829e3798af02cfba8d" Feb 26 11:35:17 crc kubenswrapper[4724]: I0226 11:35:17.793323 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:35:17 crc kubenswrapper[4724]: E0226 11:35:17.793550 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.300015 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:48132->10.217.0.214:8775: read: connection reset by peer" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.300146 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:48130->10.217.0.214:8775: read: connection reset by peer" Feb 26 11:35:19 crc kubenswrapper[4724]: E0226 11:35:19.538123 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99a58e01_ce00_4c33_8d7c_046711b4ef9a.slice/crio-conmon-c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99a58e01_ce00_4c33_8d7c_046711b4ef9a.slice/crio-c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.707892 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.809010 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.815492 4724 generic.go:334] "Generic (PLEG): container finished" podID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerID="c2c717271ff34e2946ad07dfed2ac8c83ff137d0d3ec31b3879e552718ba8ae6" exitCode=0 Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.815583 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd","Type":"ContainerDied","Data":"c2c717271ff34e2946ad07dfed2ac8c83ff137d0d3ec31b3879e552718ba8ae6"} Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.817787 4724 generic.go:334] "Generic (PLEG): container finished" podID="919b78bb-6cec-4e04-a51b-464b175630e5" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" exitCode=0 Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.817865 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"919b78bb-6cec-4e04-a51b-464b175630e5","Type":"ContainerDied","Data":"780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb"} Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.817908 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"919b78bb-6cec-4e04-a51b-464b175630e5","Type":"ContainerDied","Data":"c0d5b26d94c9244fa0a8283e35db760ea257410a1ee1ca236288a3a93ec5040e"} Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.817930 4724 scope.go:117] "RemoveContainer" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.818080 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.821119 4724 generic.go:334] "Generic (PLEG): container finished" podID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerID="c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577" exitCode=0 Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.821154 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99a58e01-ce00-4c33-8d7c-046711b4ef9a","Type":"ContainerDied","Data":"c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577"} Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.821197 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99a58e01-ce00-4c33-8d7c-046711b4ef9a","Type":"ContainerDied","Data":"8683b8f623cc8d845843cc1099c829e04659abc7aadfa4c0807ddfabcda4337f"} Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.821261 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.828816 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-config-data\") pod \"919b78bb-6cec-4e04-a51b-464b175630e5\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.829096 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7d9\" (UniqueName: \"kubernetes.io/projected/919b78bb-6cec-4e04-a51b-464b175630e5-kube-api-access-7n7d9\") pod \"919b78bb-6cec-4e04-a51b-464b175630e5\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.829274 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-combined-ca-bundle\") pod \"919b78bb-6cec-4e04-a51b-464b175630e5\" (UID: \"919b78bb-6cec-4e04-a51b-464b175630e5\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.835027 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919b78bb-6cec-4e04-a51b-464b175630e5-kube-api-access-7n7d9" (OuterVolumeSpecName: "kube-api-access-7n7d9") pod "919b78bb-6cec-4e04-a51b-464b175630e5" (UID: "919b78bb-6cec-4e04-a51b-464b175630e5"). InnerVolumeSpecName "kube-api-access-7n7d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.848131 4724 scope.go:117] "RemoveContainer" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" Feb 26 11:35:19 crc kubenswrapper[4724]: E0226 11:35:19.848990 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb\": container with ID starting with 780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb not found: ID does not exist" containerID="780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.849100 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb"} err="failed to get container status \"780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb\": rpc error: code = NotFound desc = could not find container \"780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb\": container with ID starting with 780760b8c24c8a3ab51b043a97398321f33ea6ee086c7b508371df39232271cb not found: ID does not exist" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.849172 4724 scope.go:117] "RemoveContainer" containerID="c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.871339 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-config-data" (OuterVolumeSpecName: "config-data") pod "919b78bb-6cec-4e04-a51b-464b175630e5" (UID: "919b78bb-6cec-4e04-a51b-464b175630e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.871511 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "919b78bb-6cec-4e04-a51b-464b175630e5" (UID: "919b78bb-6cec-4e04-a51b-464b175630e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.876681 4724 scope.go:117] "RemoveContainer" containerID="b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.930619 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99a58e01-ce00-4c33-8d7c-046711b4ef9a-logs\") pod \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.930762 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkrpv\" (UniqueName: \"kubernetes.io/projected/99a58e01-ce00-4c33-8d7c-046711b4ef9a-kube-api-access-zkrpv\") pod \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.930786 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-config-data\") pod \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.930899 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-nova-metadata-tls-certs\") pod \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.931023 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-combined-ca-bundle\") pod \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\" (UID: \"99a58e01-ce00-4c33-8d7c-046711b4ef9a\") " Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.931969 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7d9\" (UniqueName: \"kubernetes.io/projected/919b78bb-6cec-4e04-a51b-464b175630e5-kube-api-access-7n7d9\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.931990 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.931999 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919b78bb-6cec-4e04-a51b-464b175630e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.932854 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99a58e01-ce00-4c33-8d7c-046711b4ef9a-logs" (OuterVolumeSpecName: "logs") pod "99a58e01-ce00-4c33-8d7c-046711b4ef9a" (UID: "99a58e01-ce00-4c33-8d7c-046711b4ef9a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.936032 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a58e01-ce00-4c33-8d7c-046711b4ef9a-kube-api-access-zkrpv" (OuterVolumeSpecName: "kube-api-access-zkrpv") pod "99a58e01-ce00-4c33-8d7c-046711b4ef9a" (UID: "99a58e01-ce00-4c33-8d7c-046711b4ef9a"). InnerVolumeSpecName "kube-api-access-zkrpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.939884 4724 scope.go:117] "RemoveContainer" containerID="c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577" Feb 26 11:35:19 crc kubenswrapper[4724]: E0226 11:35:19.940444 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577\": container with ID starting with c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577 not found: ID does not exist" containerID="c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.940476 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577"} err="failed to get container status \"c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577\": rpc error: code = NotFound desc = could not find container \"c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577\": container with ID starting with c28ce24718f972547d34c6a7e47adceff5982e14d5c594c48ebe0fad746de577 not found: ID does not exist" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.940501 4724 scope.go:117] "RemoveContainer" containerID="b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd" Feb 26 11:35:19 crc kubenswrapper[4724]: E0226 11:35:19.945194 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd\": container with ID starting with b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd not found: ID does not exist" containerID="b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.945237 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd"} err="failed to get container status \"b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd\": rpc error: code = NotFound desc = could not find container \"b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd\": container with ID starting with b382b7484911baa49c95f1b1715cbbfecece3a443ad410a8e2f08ecfc26d79bd not found: ID does not exist" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.986253 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-config-data" (OuterVolumeSpecName: "config-data") pod "99a58e01-ce00-4c33-8d7c-046711b4ef9a" (UID: "99a58e01-ce00-4c33-8d7c-046711b4ef9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:19 crc kubenswrapper[4724]: I0226 11:35:19.986887 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99a58e01-ce00-4c33-8d7c-046711b4ef9a" (UID: "99a58e01-ce00-4c33-8d7c-046711b4ef9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.019582 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "99a58e01-ce00-4c33-8d7c-046711b4ef9a" (UID: "99a58e01-ce00-4c33-8d7c-046711b4ef9a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.033908 4724 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.033945 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.033954 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99a58e01-ce00-4c33-8d7c-046711b4ef9a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.033965 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkrpv\" (UniqueName: \"kubernetes.io/projected/99a58e01-ce00-4c33-8d7c-046711b4ef9a-kube-api-access-zkrpv\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.033974 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a58e01-ce00-4c33-8d7c-046711b4ef9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.149241 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.162741 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.176467 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.187943 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.228422 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: E0226 11:35:20.228963 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-log" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.228982 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-log" Feb 26 11:35:20 crc kubenswrapper[4724]: E0226 11:35:20.229012 4724 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-metadata" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229019 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-metadata" Feb 26 11:35:20 crc kubenswrapper[4724]: E0226 11:35:20.229029 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1754e31b-5617-4b43-96ec-fa7f2845b2de" containerName="nova-manage" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229037 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1754e31b-5617-4b43-96ec-fa7f2845b2de" containerName="nova-manage" Feb 26 11:35:20 crc kubenswrapper[4724]: E0226 11:35:20.229066 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919b78bb-6cec-4e04-a51b-464b175630e5" containerName="nova-scheduler-scheduler" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229074 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="919b78bb-6cec-4e04-a51b-464b175630e5" containerName="nova-scheduler-scheduler" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229320 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1754e31b-5617-4b43-96ec-fa7f2845b2de" containerName="nova-manage" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229342 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-metadata" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229355 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" containerName="nova-metadata-log" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.229369 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="919b78bb-6cec-4e04-a51b-464b175630e5" containerName="nova-scheduler-scheduler" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.230169 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.237628 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.264387 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.347248 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8972b4b1-55d2-433f-a7f0-886a242a9db2-config-data\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.348707 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8972b4b1-55d2-433f-a7f0-886a242a9db2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.348849 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgng\" (UniqueName: \"kubernetes.io/projected/8972b4b1-55d2-433f-a7f0-886a242a9db2-kube-api-access-lpgng\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.399993 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.420795 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.424880 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.425166 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.441400 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.450985 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8972b4b1-55d2-433f-a7f0-886a242a9db2-config-data\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.451052 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8972b4b1-55d2-433f-a7f0-886a242a9db2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.451110 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpgng\" (UniqueName: \"kubernetes.io/projected/8972b4b1-55d2-433f-a7f0-886a242a9db2-kube-api-access-lpgng\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.455367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8972b4b1-55d2-433f-a7f0-886a242a9db2-config-data\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.455367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8972b4b1-55d2-433f-a7f0-886a242a9db2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.483301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpgng\" (UniqueName: \"kubernetes.io/projected/8972b4b1-55d2-433f-a7f0-886a242a9db2-kube-api-access-lpgng\") pod \"nova-scheduler-0\" (UID: \"8972b4b1-55d2-433f-a7f0-886a242a9db2\") " pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.553213 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.553272 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ba1adb-959d-470b-a25d-5967665793f3-logs\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.553313 
4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.553357 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2jl\" (UniqueName: \"kubernetes.io/projected/a3ba1adb-959d-470b-a25d-5967665793f3-kube-api-access-zk2jl\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.553387 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-config-data\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.611757 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.655440 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.655491 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ba1adb-959d-470b-a25d-5967665793f3-logs\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.655527 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.655568 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2jl\" (UniqueName: \"kubernetes.io/projected/a3ba1adb-959d-470b-a25d-5967665793f3-kube-api-access-zk2jl\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.655598 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-config-data\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.656537 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ba1adb-959d-470b-a25d-5967665793f3-logs\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.660163 4724 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.660787 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-config-data\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.668885 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ba1adb-959d-470b-a25d-5967665793f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.672119 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2jl\" (UniqueName: \"kubernetes.io/projected/a3ba1adb-959d-470b-a25d-5967665793f3-kube-api-access-zk2jl\") pod \"nova-metadata-0\" (UID: \"a3ba1adb-959d-470b-a25d-5967665793f3\") " pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.720765 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.743743 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.954018 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-public-tls-certs\") pod \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.954318 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-internal-tls-certs\") pod \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.954495 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-logs\") pod \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.954740 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-combined-ca-bundle\") pod \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.954896 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-config-data\") pod \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.955035 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-55slw\" (UniqueName: \"kubernetes.io/projected/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-kube-api-access-55slw\") pod \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\" (UID: \"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd\") " Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.960130 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-logs" (OuterVolumeSpecName: "logs") pod "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" (UID: "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.981779 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa4d047-3afc-49e5-83d5-34ba23ae7cfd","Type":"ContainerDied","Data":"9c734ca600ab8c0fa458196c9bdc6dee21571d5e98eb72729a27b35826c96168"} Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.981829 4724 scope.go:117] "RemoveContainer" containerID="c2c717271ff34e2946ad07dfed2ac8c83ff137d0d3ec31b3879e552718ba8ae6" Feb 26 11:35:20 crc kubenswrapper[4724]: I0226 11:35:20.982015 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.003103 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-kube-api-access-55slw" (OuterVolumeSpecName: "kube-api-access-55slw") pod "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" (UID: "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd"). InnerVolumeSpecName "kube-api-access-55slw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.048897 4724 scope.go:117] "RemoveContainer" containerID="01997d7ccb528fed9f47c89985a3717556fb81a96c687eb8d1586911b9014f07" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.064730 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55slw\" (UniqueName: \"kubernetes.io/projected/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-kube-api-access-55slw\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.064768 4724 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-logs\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.068562 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" (UID: "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.076005 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-config-data" (OuterVolumeSpecName: "config-data") pod "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" (UID: "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.092926 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" (UID: "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.143000 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" (UID: "aaa4d047-3afc-49e5-83d5-34ba23ae7cfd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.166350 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.166391 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.166404 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.166414 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.334268 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.344213 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.352119 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.386366 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 11:35:21 crc kubenswrapper[4724]: E0226 11:35:21.386869 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-log" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.386892 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-log" Feb 26 11:35:21 crc kubenswrapper[4724]: E0226 11:35:21.386904 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-api" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.386912 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-api" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.387154 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-api" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.387194 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" containerName="nova-api-log" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.388207 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.392712 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.393348 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.393472 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.401752 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.482308 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-config-data\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.482368 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zqd\" (UniqueName: \"kubernetes.io/projected/2496c701-9abc-4d28-8f5d-9cde4cefbabb-kube-api-access-76zqd\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.482423 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2496c701-9abc-4d28-8f5d-9cde4cefbabb-logs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.482456 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.482569 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-public-tls-certs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.482630 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:21 crc kubenswrapper[4724]: I0226 11:35:21.507605 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.584943 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2496c701-9abc-4d28-8f5d-9cde4cefbabb-logs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.587864 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.588208 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-public-tls-certs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.588400 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.588629 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-config-data\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.588662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zqd\" (UniqueName: \"kubernetes.io/projected/2496c701-9abc-4d28-8f5d-9cde4cefbabb-kube-api-access-76zqd\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.590437 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2496c701-9abc-4d28-8f5d-9cde4cefbabb-logs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.594895 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.595021 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-config-data\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.597077 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.600900 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2496c701-9abc-4d28-8f5d-9cde4cefbabb-public-tls-certs\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.608909 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zqd\" (UniqueName: \"kubernetes.io/projected/2496c701-9abc-4d28-8f5d-9cde4cefbabb-kube-api-access-76zqd\") pod \"nova-api-0\" (UID: \"2496c701-9abc-4d28-8f5d-9cde4cefbabb\") " pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.894813 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.994469 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919b78bb-6cec-4e04-a51b-464b175630e5" path="/var/lib/kubelet/pods/919b78bb-6cec-4e04-a51b-464b175630e5/volumes" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.995607 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a58e01-ce00-4c33-8d7c-046711b4ef9a" path="/var/lib/kubelet/pods/99a58e01-ce00-4c33-8d7c-046711b4ef9a/volumes" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:21.996298 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa4d047-3afc-49e5-83d5-34ba23ae7cfd" path="/var/lib/kubelet/pods/aaa4d047-3afc-49e5-83d5-34ba23ae7cfd/volumes" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.033225 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3ba1adb-959d-470b-a25d-5967665793f3","Type":"ContainerStarted","Data":"74170854719de2f0d07b27aa4932cb90ca407a03b4e15d3807b2467ba134878e"} Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.043063 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8972b4b1-55d2-433f-a7f0-886a242a9db2","Type":"ContainerStarted","Data":"63168803a9ca09efdc6afbad4baa09d86e7aa275b907c51e15041c7398b5b4f0"} Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.054992 4724 generic.go:334] "Generic (PLEG): container finished" podID="b81ddf59-7703-441c-aba4-94c804a7d830" containerID="81f4a2920fce99b629f57b812de2f688ccb7786e0a8421c0f097f17b7ca5b694" exitCode=137 Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.055035 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerDied","Data":"81f4a2920fce99b629f57b812de2f688ccb7786e0a8421c0f097f17b7ca5b694"} Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.435828 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.611500 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.628865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-log-httpd\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.628926 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-combined-ca-bundle\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-run-httpd\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629168 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-config-data\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629201 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-scripts\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629241 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-sg-core-conf-yaml\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629284 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdkk\" (UniqueName: \"kubernetes.io/projected/b81ddf59-7703-441c-aba4-94c804a7d830-kube-api-access-4qdkk\") pod \"b81ddf59-7703-441c-aba4-94c804a7d830\" (UID: \"b81ddf59-7703-441c-aba4-94c804a7d830\") " Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629319 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.629941 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.630262 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.638492 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81ddf59-7703-441c-aba4-94c804a7d830-kube-api-access-4qdkk" (OuterVolumeSpecName: "kube-api-access-4qdkk") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). InnerVolumeSpecName "kube-api-access-4qdkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.660377 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-scripts" (OuterVolumeSpecName: "scripts") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.681147 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.732516 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81ddf59-7703-441c-aba4-94c804a7d830-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.732535 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.732545 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.732554 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdkk\" (UniqueName: \"kubernetes.io/projected/b81ddf59-7703-441c-aba4-94c804a7d830-kube-api-access-4qdkk\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.781740 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.792784 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-config-data" (OuterVolumeSpecName: "config-data") pod "b81ddf59-7703-441c-aba4-94c804a7d830" (UID: "b81ddf59-7703-441c-aba4-94c804a7d830"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.837000 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:22 crc kubenswrapper[4724]: I0226 11:35:22.837031 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81ddf59-7703-441c-aba4-94c804a7d830-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.081965 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8972b4b1-55d2-433f-a7f0-886a242a9db2","Type":"ContainerStarted","Data":"0e720104a3f611b4815c1532e87f809e43ad5d48e1b7c04996116918b4b1aa05"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.084995 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2496c701-9abc-4d28-8f5d-9cde4cefbabb","Type":"ContainerStarted","Data":"1c8bfbde3f6203289be083f1f6cc297aa6acc1b0bf0dee200aaf78174acd0baa"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.086162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2496c701-9abc-4d28-8f5d-9cde4cefbabb","Type":"ContainerStarted","Data":"7785949dfe138a0458d7691de177be57e676a62e547df9ba98619242ead2a806"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.086251 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2496c701-9abc-4d28-8f5d-9cde4cefbabb","Type":"ContainerStarted","Data":"ae29d02f4d974e9f6f02b097bb5b407c56d11e6cb9fd847207d9871fa7eebd90"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.091781 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81ddf59-7703-441c-aba4-94c804a7d830","Type":"ContainerDied","Data":"fbc0812476848486372908d33108f1c7dda9b4cadd6d5b08c6b19baab6925eb3"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.091844 4724 scope.go:117] "RemoveContainer" containerID="81f4a2920fce99b629f57b812de2f688ccb7786e0a8421c0f097f17b7ca5b694" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.091994 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.098814 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3ba1adb-959d-470b-a25d-5967665793f3","Type":"ContainerStarted","Data":"02ef81f2337362e9b2f89ee19f1675b79af385b925d37dd58c95d56c96319026"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.098898 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a3ba1adb-959d-470b-a25d-5967665793f3","Type":"ContainerStarted","Data":"e54b3cf03b1834150e86192b47c73dab1fbf6fe24b0efde414e6fdae50801a0a"} Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.133001 4724 scope.go:117] "RemoveContainer" containerID="949062c0737ed30dfb9166cbb8ba93450fe8de80d91b67537ea674722d9ec39e" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.157829 4724 scope.go:117] "RemoveContainer" containerID="a40ca75d11c66d3b04b76e350dae173922bcea18dd91b34d739756a1e44705b8" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.170700 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.17066923 podStartE2EDuration="2.17066923s" podCreationTimestamp="2026-02-26 11:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:23.133131406 +0000 UTC m=+1789.788870521" watchObservedRunningTime="2026-02-26 11:35:23.17066923 +0000 UTC m=+1789.826408345" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.178589 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.178568861 podStartE2EDuration="3.178568861s" podCreationTimestamp="2026-02-26 11:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:23.107810152 +0000 UTC m=+1789.763549297" watchObservedRunningTime="2026-02-26 11:35:23.178568861 +0000 UTC m=+1789.834307976" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.186877 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.186855572 podStartE2EDuration="3.186855572s" podCreationTimestamp="2026-02-26 11:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:35:23.163299753 +0000 UTC m=+1789.819038868" watchObservedRunningTime="2026-02-26 11:35:23.186855572 +0000 UTC m=+1789.842594687" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.212197 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.216544 4724 scope.go:117] "RemoveContainer" containerID="e72f82a24a13730177725de51c5823e95e9f84d785aab51378eb96f258d3998b" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.236241 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.252659 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:35:23 crc kubenswrapper[4724]: E0226 11:35:23.253192 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-central-agent" Feb 26 11:35:23 crc kubenswrapper[4724]: 
I0226 11:35:23.253209 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-central-agent" Feb 26 11:35:23 crc kubenswrapper[4724]: E0226 11:35:23.253221 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="sg-core" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253226 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="sg-core" Feb 26 11:35:23 crc kubenswrapper[4724]: E0226 11:35:23.253257 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-notification-agent" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253263 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-notification-agent" Feb 26 11:35:23 crc kubenswrapper[4724]: E0226 11:35:23.253273 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="proxy-httpd" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253279 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="proxy-httpd" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253460 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-central-agent" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253479 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="ceilometer-notification-agent" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253495 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="sg-core" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.253501 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" containerName="proxy-httpd" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.255360 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.263287 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.275812 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.279571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.346739 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.346889 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-run-httpd\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.346961 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-scripts\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.346982 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-config-data\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.347058 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.347109 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-log-httpd\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.347128 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9vs\" (UniqueName: \"kubernetes.io/projected/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-kube-api-access-9q9vs\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.448883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-run-httpd\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.448965 4724 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-scripts\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.448986 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-config-data\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.449050 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.449122 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-log-httpd\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.449153 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9vs\" (UniqueName: \"kubernetes.io/projected/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-kube-api-access-9q9vs\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.449232 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.449676 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-run-httpd\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.450601 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-log-httpd\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.453429 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.454857 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-config-data\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.455547 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-scripts\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.456003 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.468452 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9vs\" (UniqueName: \"kubernetes.io/projected/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-kube-api-access-9q9vs\") pod \"ceilometer-0\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.601548 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:35:23 crc kubenswrapper[4724]: I0226 11:35:23.989649 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81ddf59-7703-441c-aba4-94c804a7d830" path="/var/lib/kubelet/pods/b81ddf59-7703-441c-aba4-94c804a7d830/volumes" Feb 26 11:35:24 crc kubenswrapper[4724]: I0226 11:35:24.072707 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:35:24 crc kubenswrapper[4724]: I0226 11:35:24.108499 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerStarted","Data":"da46a9da63f36388ad036104c5f936b22bc24ddd98a50231ca43890b9c7edeac"} Feb 26 11:35:25 crc kubenswrapper[4724]: I0226 11:35:25.150651 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerStarted","Data":"62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e"} Feb 26 11:35:25 crc kubenswrapper[4724]: I0226 11:35:25.612525 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 11:35:25 crc kubenswrapper[4724]: I0226 11:35:25.744019 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:35:25 crc kubenswrapper[4724]: I0226 11:35:25.744487 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 11:35:26 crc kubenswrapper[4724]: I0226 11:35:26.168402 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerStarted","Data":"4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5"} Feb 26 11:35:27 crc kubenswrapper[4724]: I0226 11:35:27.178677 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerStarted","Data":"c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd"} Feb 26 11:35:30 crc kubenswrapper[4724]: I0226 11:35:30.612319 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 11:35:30 crc kubenswrapper[4724]: I0226 11:35:30.650735 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 11:35:30 crc 
kubenswrapper[4724]: I0226 11:35:30.745703 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:35:30 crc kubenswrapper[4724]: I0226 11:35:30.746300 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 11:35:30 crc kubenswrapper[4724]: I0226 11:35:30.975679 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:35:30 crc kubenswrapper[4724]: E0226 11:35:30.976020 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.232484 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerStarted","Data":"fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476"} Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.232637 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.274914 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9515164710000001 podStartE2EDuration="8.274894997s" podCreationTimestamp="2026-02-26 11:35:23 +0000 UTC" firstStartedPulling="2026-02-26 11:35:24.068449113 +0000 UTC m=+1790.724188228" lastFinishedPulling="2026-02-26 11:35:30.391827639 +0000 UTC m=+1797.047566754" observedRunningTime="2026-02-26 11:35:31.259122446 +0000 UTC m=+1797.914861571" watchObservedRunningTime="2026-02-26 11:35:31.274894997 +0000 UTC m=+1797.930634112" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.275538 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.758377 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a3ba1adb-959d-470b-a25d-5967665793f3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.758409 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a3ba1adb-959d-470b-a25d-5967665793f3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.895687 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:35:31 crc kubenswrapper[4724]: I0226 11:35:31.895743 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 11:35:32 crc kubenswrapper[4724]: I0226 11:35:32.910397 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2496c701-9abc-4d28-8f5d-9cde4cefbabb" containerName="nova-api-log" 
probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 11:35:32 crc kubenswrapper[4724]: I0226 11:35:32.910411 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2496c701-9abc-4d28-8f5d-9cde4cefbabb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 11:35:40 crc kubenswrapper[4724]: I0226 11:35:40.751102 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:35:40 crc kubenswrapper[4724]: I0226 11:35:40.754541 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 11:35:40 crc kubenswrapper[4724]: I0226 11:35:40.756988 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.320981 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.815963 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s45hk"] Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.818210 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.834642 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s45hk"] Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.957715 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-utilities\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.957783 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-catalog-content\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.957857 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b6js\" (UniqueName: \"kubernetes.io/projected/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-kube-api-access-6b6js\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.992578 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.992643 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.993113 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:35:41 crc kubenswrapper[4724]: I0226 11:35:41.993144 4724 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.008607 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.012966 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.059151 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-utilities\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.059272 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-catalog-content\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.059344 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b6js\" (UniqueName: \"kubernetes.io/projected/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-kube-api-access-6b6js\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.059817 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-utilities\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.060047 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-catalog-content\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.093324 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b6js\" (UniqueName: \"kubernetes.io/projected/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-kube-api-access-6b6js\") pod \"certified-operators-s45hk\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.138557 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.743542 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s45hk"] Feb 26 11:35:42 crc kubenswrapper[4724]: W0226 11:35:42.756106 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea7df9bd_e8d9_4cf2_8af9_b1a6bf2cb5ac.slice/crio-7af3e91d03c014c21cbd4ee7d75243893b2e760fea01f7f0e7b0d08e125bcae4 WatchSource:0}: Error finding container 7af3e91d03c014c21cbd4ee7d75243893b2e760fea01f7f0e7b0d08e125bcae4: Status 404 returned error can't find the container with id 7af3e91d03c014c21cbd4ee7d75243893b2e760fea01f7f0e7b0d08e125bcae4 Feb 26 11:35:42 crc kubenswrapper[4724]: I0226 11:35:42.975448 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:35:42 crc kubenswrapper[4724]: E0226 11:35:42.976486 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:35:43 crc kubenswrapper[4724]: I0226 11:35:43.366416 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerID="b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05" exitCode=0 Feb 26 11:35:43 crc kubenswrapper[4724]: I0226 11:35:43.366476 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerDied","Data":"b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05"} Feb 26 11:35:43 crc kubenswrapper[4724]: I0226 11:35:43.366990 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerStarted","Data":"7af3e91d03c014c21cbd4ee7d75243893b2e760fea01f7f0e7b0d08e125bcae4"} Feb 26 11:35:43 crc kubenswrapper[4724]: I0226 11:35:43.369092 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:35:45 crc kubenswrapper[4724]: I0226 11:35:45.386058 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerStarted","Data":"54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e"} Feb 26 11:35:48 crc kubenswrapper[4724]: I0226 11:35:48.416387 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerID="54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e" exitCode=0 Feb 26 11:35:48 crc kubenswrapper[4724]: I0226 11:35:48.416453 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerDied","Data":"54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e"} Feb 26 11:35:49 crc kubenswrapper[4724]: I0226 11:35:49.430376 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerStarted","Data":"a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c"} Feb 26 11:35:49 crc kubenswrapper[4724]: I0226 11:35:49.461146 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s45hk" podStartSLOduration=2.978499594 podStartE2EDuration="8.46112421s" podCreationTimestamp="2026-02-26 11:35:41 +0000 UTC" firstStartedPulling="2026-02-26 11:35:43.368856819 +0000 UTC m=+1810.024595934" lastFinishedPulling="2026-02-26 11:35:48.851481435 +0000 UTC m=+1815.507220550" observedRunningTime="2026-02-26 11:35:49.449978787 +0000 UTC m=+1816.105717902" watchObservedRunningTime="2026-02-26 11:35:49.46112421 +0000 UTC m=+1816.116863325" Feb 26 11:35:52 crc kubenswrapper[4724]: I0226 11:35:52.139662 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:52 crc kubenswrapper[4724]: I0226 11:35:52.139978 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:35:53 crc kubenswrapper[4724]: I0226 11:35:53.183562 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s45hk" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="registry-server" probeResult="failure" output=< Feb 26 11:35:53 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:35:53 crc kubenswrapper[4724]: > Feb 26 11:35:53 crc kubenswrapper[4724]: I0226 11:35:53.643862 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 11:35:54 crc kubenswrapper[4724]: I0226 11:35:54.976288 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:35:54 crc kubenswrapper[4724]: E0226 11:35:54.976578 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:35:55 crc kubenswrapper[4724]: I0226 11:35:55.708057 4724 scope.go:117] "RemoveContainer" containerID="7975d99a72a3c4b2d306233ff3eda269b60a6993b690843b8a87460726bc32da" Feb 26 11:35:55 crc kubenswrapper[4724]: I0226 11:35:55.744681 4724 scope.go:117] "RemoveContainer" containerID="978c9f1dadd2fe427d90affee829023bd6e57a29c4e230020d3e9d63c9331b19" Feb 26 11:35:55 crc kubenswrapper[4724]: I0226 11:35:55.836612 4724 scope.go:117] "RemoveContainer" containerID="4cfd1f80078583554f5f1f90824e816b28b9447c41a0d397bca70469d63e4d7d" Feb 26 11:35:55 crc kubenswrapper[4724]: I0226 11:35:55.905345 4724 scope.go:117] "RemoveContainer" containerID="d5e1fcd72bc882e298601e88c79f821339f695f6b1df1f0d88b74af683f964b2" Feb 26 11:35:57 crc kubenswrapper[4724]: I0226 11:35:57.454525 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:35:57 crc kubenswrapper[4724]: I0226 11:35:57.456282 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" 
podUID="a9a1a92d-3769-4901-89b0-2fa52cbb547a" containerName="kube-state-metrics" containerID="cri-o://92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e" gracePeriod=30 Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.061884 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.218032 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlr7\" (UniqueName: \"kubernetes.io/projected/a9a1a92d-3769-4901-89b0-2fa52cbb547a-kube-api-access-8rlr7\") pod \"a9a1a92d-3769-4901-89b0-2fa52cbb547a\" (UID: \"a9a1a92d-3769-4901-89b0-2fa52cbb547a\") " Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.225503 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a1a92d-3769-4901-89b0-2fa52cbb547a-kube-api-access-8rlr7" (OuterVolumeSpecName: "kube-api-access-8rlr7") pod "a9a1a92d-3769-4901-89b0-2fa52cbb547a" (UID: "a9a1a92d-3769-4901-89b0-2fa52cbb547a"). InnerVolumeSpecName "kube-api-access-8rlr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.322094 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlr7\" (UniqueName: \"kubernetes.io/projected/a9a1a92d-3769-4901-89b0-2fa52cbb547a-kube-api-access-8rlr7\") on node \"crc\" DevicePath \"\"" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.514453 4724 generic.go:334] "Generic (PLEG): container finished" podID="a9a1a92d-3769-4901-89b0-2fa52cbb547a" containerID="92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e" exitCode=2 Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.514489 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.514500 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9a1a92d-3769-4901-89b0-2fa52cbb547a","Type":"ContainerDied","Data":"92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e"} Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.514533 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9a1a92d-3769-4901-89b0-2fa52cbb547a","Type":"ContainerDied","Data":"3d462953177811c2f21ed66141f10056187043dca7c2504e933742b7f4d697ce"} Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.514551 4724 scope.go:117] "RemoveContainer" containerID="92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.543754 4724 scope.go:117] "RemoveContainer" containerID="92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e" Feb 26 11:35:58 crc kubenswrapper[4724]: E0226 11:35:58.544359 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e\": container with ID starting with 92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e not found: ID does not exist" containerID="92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.544400 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e"} err="failed to get container status \"92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e\": rpc error: code = NotFound desc = could not find container \"92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e\": container with ID starting with 92948e4a8856fdc4a7a0d7b4e275154b86dc00d827ac85cf0f3dd688fb623f0e not found: ID does not exist" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.549284 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.576595 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.589451 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:35:58 crc kubenswrapper[4724]: E0226 11:35:58.589996 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a1a92d-3769-4901-89b0-2fa52cbb547a" containerName="kube-state-metrics" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.590022 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a1a92d-3769-4901-89b0-2fa52cbb547a" containerName="kube-state-metrics" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.590262 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a1a92d-3769-4901-89b0-2fa52cbb547a" containerName="kube-state-metrics" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.591090 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.600244 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.600627 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.602441 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.729699 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.729842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.729875 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5px\" (UniqueName: \"kubernetes.io/projected/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-api-access-vr5px\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.729921 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.832044 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.832108 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5px\" (UniqueName: \"kubernetes.io/projected/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-api-access-vr5px\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.832159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.832207 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.836969 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.837013 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.840078 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.853249 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5px\" (UniqueName: \"kubernetes.io/projected/4ea1726a-a8a4-4e5d-b39f-c8393e0dad54-kube-api-access-vr5px\") pod \"kube-state-metrics-0\" (UID: \"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54\") " pod="openstack/kube-state-metrics-0" Feb 26 11:35:58 crc kubenswrapper[4724]: I0226 11:35:58.914965 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.376270 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.524320 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54","Type":"ContainerStarted","Data":"7c93245bae5ebcbc8c64edcff97614aa33e10a35887b685c4f7fd8a3ab446a8a"} Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.678595 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.678907 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="proxy-httpd" containerID="cri-o://fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476" gracePeriod=30 Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.678985 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="sg-core" containerID="cri-o://c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd" gracePeriod=30 Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.679056 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-central-agent" containerID="cri-o://62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e" gracePeriod=30 Feb 26 11:35:59 crc kubenswrapper[4724]: I0226 11:35:59.679152 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-notification-agent" containerID="cri-o://4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5" gracePeriod=30 Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.083628 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a1a92d-3769-4901-89b0-2fa52cbb547a" path="/var/lib/kubelet/pods/a9a1a92d-3769-4901-89b0-2fa52cbb547a/volumes" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.196298 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535096-72km4"] Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.198005 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.201700 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.202451 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.202750 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.238205 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-72km4"] Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.367417 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4b6\" (UniqueName: \"kubernetes.io/projected/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2-kube-api-access-kw4b6\") pod \"auto-csr-approver-29535096-72km4\" (UID: \"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2\") " pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.468844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw4b6\" (UniqueName: \"kubernetes.io/projected/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2-kube-api-access-kw4b6\") pod \"auto-csr-approver-29535096-72km4\" (UID: \"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2\") " pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.488008 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw4b6\" (UniqueName: \"kubernetes.io/projected/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2-kube-api-access-kw4b6\") pod \"auto-csr-approver-29535096-72km4\" (UID: \"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2\") " pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.529432 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.537684 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4ea1726a-a8a4-4e5d-b39f-c8393e0dad54","Type":"ContainerStarted","Data":"e837706e0a0ceb4642f14cfc80086fd2037fdfbc329577cfd753cf9720910f75"} Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.537856 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.541322 4724 generic.go:334] "Generic (PLEG): container finished" podID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerID="fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476" exitCode=0 Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.541351 4724 generic.go:334] "Generic (PLEG): container finished" podID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerID="c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd" exitCode=2 Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.541389 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerDied","Data":"fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476"} Feb 26 11:36:00 crc kubenswrapper[4724]: I0226 11:36:00.541416 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerDied","Data":"c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd"} Feb 26 11:36:00 crc kubenswrapper[4724]: E0226 11:36:00.660359 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21cc4bbc_d9b0_4cbc_be72_e07817c4c242.slice/crio-conmon-62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e.scope\": RecentStats: unable to find data in memory cache]" Feb 26 11:36:01 crc kubenswrapper[4724]: I0226 11:36:01.050364 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.5030075050000002 podStartE2EDuration="3.050345975s" podCreationTimestamp="2026-02-26 11:35:58 +0000 UTC" firstStartedPulling="2026-02-26 11:35:59.387051725 +0000 UTC m=+1826.042790840" lastFinishedPulling="2026-02-26 11:35:59.934390195 +0000 UTC m=+1826.590129310" observedRunningTime="2026-02-26 11:36:00.55976456 +0000 UTC m=+1827.215503695" watchObservedRunningTime="2026-02-26 11:36:01.050345975 +0000 UTC m=+1827.706085090" Feb 26 11:36:01 crc kubenswrapper[4724]: I0226 11:36:01.058914 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-72km4"] Feb 26 11:36:01 crc kubenswrapper[4724]: I0226 11:36:01.560831 4724 generic.go:334] "Generic (PLEG): container finished" podID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerID="62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e" exitCode=0 Feb 26 11:36:01 crc kubenswrapper[4724]: I0226 11:36:01.561165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerDied","Data":"62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e"} Feb 26 11:36:01 crc kubenswrapper[4724]: I0226 11:36:01.563011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29535096-72km4" event={"ID":"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2","Type":"ContainerStarted","Data":"01f6fa88532de06f7456b86ac8e8a2456cacf563caa4956160277d3b55d53431"} Feb 26 11:36:02 crc kubenswrapper[4724]: I0226 11:36:02.190286 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:36:02 crc kubenswrapper[4724]: I0226 11:36:02.248635 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:36:02 crc kubenswrapper[4724]: I0226 11:36:02.438476 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s45hk"] Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.039246 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.128892 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-scripts\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.129562 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9vs\" (UniqueName: \"kubernetes.io/projected/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-kube-api-access-9q9vs\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.129922 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-run-httpd\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.130031 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-config-data\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.130128 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-combined-ca-bundle\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.130285 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-sg-core-conf-yaml\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.130433 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-log-httpd\") pod \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\" (UID: \"21cc4bbc-d9b0-4cbc-be72-e07817c4c242\") " Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.132736 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.133767 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.139145 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-kube-api-access-9q9vs" (OuterVolumeSpecName: "kube-api-access-9q9vs") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "kube-api-access-9q9vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.144886 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-scripts" (OuterVolumeSpecName: "scripts") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.193862 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.237594 4724 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.237649 4724 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.237668 4724 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.237677 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9vs\" (UniqueName: \"kubernetes.io/projected/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-kube-api-access-9q9vs\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.237685 4724 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.277116 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.302223 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-config-data" (OuterVolumeSpecName: "config-data") pod "21cc4bbc-d9b0-4cbc-be72-e07817c4c242" (UID: "21cc4bbc-d9b0-4cbc-be72-e07817c4c242"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.340401 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.340732 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cc4bbc-d9b0-4cbc-be72-e07817c4c242-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.583550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535096-72km4" event={"ID":"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2","Type":"ContainerStarted","Data":"dbced93decb21b9e23f0f5687b3b63463048bed4da57ca9ddb5457ed23d25894"} Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.588493 4724 generic.go:334] "Generic (PLEG): container finished" podID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerID="4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5" exitCode=0 Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.588579 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.588635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerDied","Data":"4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5"} Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.588671 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21cc4bbc-d9b0-4cbc-be72-e07817c4c242","Type":"ContainerDied","Data":"da46a9da63f36388ad036104c5f936b22bc24ddd98a50231ca43890b9c7edeac"} Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.588691 4724 scope.go:117] "RemoveContainer" containerID="fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.589025 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s45hk" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="registry-server" containerID="cri-o://a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c" gracePeriod=2 Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.612956 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535096-72km4" podStartSLOduration=1.664404972 podStartE2EDuration="3.612931507s" podCreationTimestamp="2026-02-26 11:36:00 +0000 UTC" firstStartedPulling="2026-02-26 11:36:01.065386858 +0000 UTC m=+1827.721125983" lastFinishedPulling="2026-02-26 11:36:03.013913403 +0000 UTC m=+1829.669652518" observedRunningTime="2026-02-26 11:36:03.601484886 +0000 UTC m=+1830.257224011" watchObservedRunningTime="2026-02-26 11:36:03.612931507 +0000 UTC m=+1830.268670622" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.620384 4724 scope.go:117] "RemoveContainer" containerID="c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.642299 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.654277 4724 scope.go:117] "RemoveContainer" containerID="4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.656947 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.708563 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.708958 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="sg-core" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.708970 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="sg-core" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.708985 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="proxy-httpd" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.708992 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="proxy-httpd" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.709002 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-central-agent" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.709008 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-central-agent" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.709024 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-notification-agent" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.709030 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-notification-agent" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.709224 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-notification-agent" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.709238 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="ceilometer-central-agent" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.709249 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="proxy-httpd" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.709267 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" containerName="sg-core" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.711370 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.725918 4724 scope.go:117] "RemoveContainer" containerID="62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.726309 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.726595 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.728116 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.781782 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854394 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3da6a1f6-3a11-4249-8038-9b41635e7011-run-httpd\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854432 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fp78\" (UniqueName: \"kubernetes.io/projected/3da6a1f6-3a11-4249-8038-9b41635e7011-kube-api-access-9fp78\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854493 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854508 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-scripts\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-config-data\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854595 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3da6a1f6-3a11-4249-8038-9b41635e7011-log-httpd\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.854636 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.868922 4724 scope.go:117] "RemoveContainer" containerID="fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.882506 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476\": container with ID starting with fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476 not found: ID does not exist" containerID="fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.882562 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476"} err="failed to get container status \"fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476\": rpc error: code = NotFound desc = could not find container \"fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476\": container with ID starting with fb0ec0ed57eda63a9722d457d53cb9ee02fae00167ea6b25b7179f1dcf616476 not found: ID does not exist" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.882590 4724 scope.go:117] "RemoveContainer" containerID="c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.884134 4724 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd\": container with ID starting with c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd not found: ID does not exist" containerID="c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.884166 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd"} err="failed to get container status \"c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd\": rpc error: code = NotFound desc = could not find container \"c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd\": container with ID starting with c0d05022a7b3a169fae7f2e59ab456097b1262efdd6a8a4377d1228fc0b76dcd not found: ID does not exist" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.884196 4724 scope.go:117] "RemoveContainer" containerID="4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.887045 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5\": container with ID starting with 4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5 not found: ID does not exist" containerID="4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.887082 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5"} err="failed to get container status \"4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5\": rpc error: code = NotFound desc = could not find container \"4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5\": container with ID starting with 4908a5f51c8be56e738a4bf5809739751f2bf9661b53e542e6dde1675caa70c5 not found: ID does not exist" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.887103 4724 scope.go:117] "RemoveContainer" containerID="62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e" Feb 26 11:36:03 crc kubenswrapper[4724]: E0226 11:36:03.890842 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e\": container with ID starting with 62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e not found: ID does not exist" containerID="62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.890876 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e"} err="failed to get container status \"62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e\": rpc error: code = NotFound desc = could not find container \"62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e\": container with ID starting with 62120b44b852af987eabec87e6844ddb2b58b27721adb4053025b851b334876e not found: ID does not exist" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.956670 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3da6a1f6-3a11-4249-8038-9b41635e7011-log-httpd\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.956915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.957013 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3da6a1f6-3a11-4249-8038-9b41635e7011-run-httpd\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.957096 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fp78\" (UniqueName: \"kubernetes.io/projected/3da6a1f6-3a11-4249-8038-9b41635e7011-kube-api-access-9fp78\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.957165 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.957341 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-scripts\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.957407 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.957493 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-config-data\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.958769 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3da6a1f6-3a11-4249-8038-9b41635e7011-log-httpd\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.978556 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3da6a1f6-3a11-4249-8038-9b41635e7011-run-httpd\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.979974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-scripts\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:03 crc kubenswrapper[4724]: I0226 11:36:03.989101 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.017959 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.021987 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-config-data\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.023510 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fp78\" (UniqueName: \"kubernetes.io/projected/3da6a1f6-3a11-4249-8038-9b41635e7011-kube-api-access-9fp78\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.030974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da6a1f6-3a11-4249-8038-9b41635e7011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3da6a1f6-3a11-4249-8038-9b41635e7011\") " pod="openstack/ceilometer-0" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.053561 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21cc4bbc-d9b0-4cbc-be72-e07817c4c242" path="/var/lib/kubelet/pods/21cc4bbc-d9b0-4cbc-be72-e07817c4c242/volumes" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.156032 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.263459 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.372650 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-utilities\") pod \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.373317 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b6js\" (UniqueName: \"kubernetes.io/projected/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-kube-api-access-6b6js\") pod \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.373572 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-catalog-content\") pod \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\" (UID: \"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac\") " Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.377059 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-utilities" (OuterVolumeSpecName: "utilities") pod "ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" (UID: "ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.379757 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-kube-api-access-6b6js" (OuterVolumeSpecName: "kube-api-access-6b6js") pod "ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" (UID: "ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac"). InnerVolumeSpecName "kube-api-access-6b6js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.443498 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" (UID: "ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.477898 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b6js\" (UniqueName: \"kubernetes.io/projected/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-kube-api-access-6b6js\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.477937 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.477949 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.600464 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerID="a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c" exitCode=0 Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.600511 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s45hk" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.600520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerDied","Data":"a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c"} Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.600594 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s45hk" event={"ID":"ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac","Type":"ContainerDied","Data":"7af3e91d03c014c21cbd4ee7d75243893b2e760fea01f7f0e7b0d08e125bcae4"} Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.600615 4724 scope.go:117] "RemoveContainer" containerID="a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.627280 4724 scope.go:117] "RemoveContainer" containerID="54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.651373 4724 scope.go:117] "RemoveContainer" containerID="b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.654116 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s45hk"] Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.668132 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s45hk"] Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.733132 4724 scope.go:117] "RemoveContainer" containerID="a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c" Feb 26 11:36:04 crc kubenswrapper[4724]: E0226 11:36:04.735334 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c\": container with ID starting with a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c not found: ID does not exist" containerID="a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.735407 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c"} err="failed to get container status \"a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c\": rpc error: code = NotFound desc = could not find container \"a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c\": container with ID starting with a6136cd4d4592cecab48c78a4933b66c593d16b9cd9f859a8b94cd668175768c not found: ID does not exist" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.735444 4724 scope.go:117] "RemoveContainer" containerID="54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.736153 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 11:36:04 crc kubenswrapper[4724]: E0226 11:36:04.740271 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e\": container with ID starting with 54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e not found: ID does not exist" containerID="54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.740326 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e"} err="failed to get container status \"54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e\": rpc error: code = NotFound desc = could not find container \"54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e\": container with ID starting with 54bf9f75258273632b3b4523d0c2910a79e3ff3270b7c13909ef768f10c46a1e not found: ID does not exist" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.740385 4724 scope.go:117] "RemoveContainer" containerID="b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05" Feb 26 11:36:04 crc kubenswrapper[4724]: E0226 11:36:04.740895 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05\": container with ID starting with b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05 not found: ID does not exist" containerID="b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05" Feb 26 11:36:04 crc kubenswrapper[4724]: I0226 11:36:04.740936 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05"} err="failed to get container status \"b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05\": rpc error: code = NotFound desc = could not find container \"b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05\": container with ID starting with b1f610ad1d6dfdc9325341d8689ef9e600002164f378a18f37be19995e962c05 not found: ID does not exist" Feb 26 11:36:05 crc kubenswrapper[4724]: I0226 11:36:05.619693 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3da6a1f6-3a11-4249-8038-9b41635e7011","Type":"ContainerStarted","Data":"513a6164cff91abd9a9da3714b6b8fd7da63a3f55e2b8b875a110e4361ec04aa"} Feb 26 11:36:05 crc kubenswrapper[4724]: I0226 11:36:05.620013 4724 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"3da6a1f6-3a11-4249-8038-9b41635e7011","Type":"ContainerStarted","Data":"78888338336d3f7e55ce1f53cd2ca5ae8ac521d60b9b4f9bc728951ab689ca45"} Feb 26 11:36:05 crc kubenswrapper[4724]: I0226 11:36:05.630232 4724 generic.go:334] "Generic (PLEG): container finished" podID="4ed5fa8e-48cb-497b-b871-3dd17b4a77e2" containerID="dbced93decb21b9e23f0f5687b3b63463048bed4da57ca9ddb5457ed23d25894" exitCode=0 Feb 26 11:36:05 crc kubenswrapper[4724]: I0226 11:36:05.630529 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535096-72km4" event={"ID":"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2","Type":"ContainerDied","Data":"dbced93decb21b9e23f0f5687b3b63463048bed4da57ca9ddb5457ed23d25894"} Feb 26 11:36:06 crc kubenswrapper[4724]: I0226 11:36:06.015067 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" path="/var/lib/kubelet/pods/ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac/volumes" Feb 26 11:36:06 crc kubenswrapper[4724]: I0226 11:36:06.016591 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:36:06 crc kubenswrapper[4724]: I0226 11:36:06.641616 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3da6a1f6-3a11-4249-8038-9b41635e7011","Type":"ContainerStarted","Data":"c5a967d0f772882706c32a80383b62ffd3ec7cb693a7bb021e383a7a0573fa32"} Feb 26 11:36:06 crc kubenswrapper[4724]: I0226 11:36:06.976596 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:36:06 crc kubenswrapper[4724]: E0226 11:36:06.977042 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.141817 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.262092 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw4b6\" (UniqueName: \"kubernetes.io/projected/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2-kube-api-access-kw4b6\") pod \"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2\" (UID: \"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2\") " Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.285371 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2-kube-api-access-kw4b6" (OuterVolumeSpecName: "kube-api-access-kw4b6") pod "4ed5fa8e-48cb-497b-b871-3dd17b4a77e2" (UID: "4ed5fa8e-48cb-497b-b871-3dd17b4a77e2"). InnerVolumeSpecName "kube-api-access-kw4b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.364565 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw4b6\" (UniqueName: \"kubernetes.io/projected/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2-kube-api-access-kw4b6\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.621738 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.657398 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535096-72km4" Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.657397 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535096-72km4" event={"ID":"4ed5fa8e-48cb-497b-b871-3dd17b4a77e2","Type":"ContainerDied","Data":"01f6fa88532de06f7456b86ac8e8a2456cacf563caa4956160277d3b55d53431"} Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.657529 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f6fa88532de06f7456b86ac8e8a2456cacf563caa4956160277d3b55d53431" Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.661809 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3da6a1f6-3a11-4249-8038-9b41635e7011","Type":"ContainerStarted","Data":"9bbca6e1abe5f9bcc6f9cd44b92fb585815e6bacb664f0cfb2b658db0c0125b5"} Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.781897 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-lwblq"] Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.799129 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535090-lwblq"] Feb 26 11:36:07 crc kubenswrapper[4724]: I0226 11:36:07.990803 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9c2690-0081-4d25-9813-e94f387c218d" path="/var/lib/kubelet/pods/2e9c2690-0081-4d25-9813-e94f387c218d/volumes" Feb 26 11:36:08 crc kubenswrapper[4724]: I0226 11:36:08.938042 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 11:36:10 crc kubenswrapper[4724]: I0226 11:36:10.699727 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3da6a1f6-3a11-4249-8038-9b41635e7011","Type":"ContainerStarted","Data":"d43022e088176a4fbca5945bccf594c2cc7f07f84a70286d7b703f3124294530"} Feb 26 11:36:10 crc kubenswrapper[4724]: I0226 11:36:10.700096 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 11:36:10 crc kubenswrapper[4724]: I0226 11:36:10.778375 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.778327217 podStartE2EDuration="7.778353148s" podCreationTimestamp="2026-02-26 11:36:03 +0000 UTC" firstStartedPulling="2026-02-26 11:36:04.7404041 +0000 UTC m=+1831.396143215" lastFinishedPulling="2026-02-26 11:36:09.740430031 +0000 UTC m=+1836.396169146" observedRunningTime="2026-02-26 11:36:10.770076417 +0000 UTC m=+1837.425815542" watchObservedRunningTime="2026-02-26 11:36:10.778353148 +0000 UTC m=+1837.434092263" Feb 26 11:36:12 crc kubenswrapper[4724]: I0226 11:36:12.764826 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" 
podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="rabbitmq" containerID="cri-o://e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8" gracePeriod=604795 Feb 26 11:36:13 crc kubenswrapper[4724]: I0226 11:36:13.016476 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="rabbitmq" containerID="cri-o://080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9" gracePeriod=604794 Feb 26 11:36:13 crc kubenswrapper[4724]: I0226 11:36:13.529792 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Feb 26 11:36:13 crc kubenswrapper[4724]: I0226 11:36:13.919755 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 26 11:36:18 crc kubenswrapper[4724]: I0226 11:36:18.984308 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:36:18 crc kubenswrapper[4724]: E0226 11:36:18.985213 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.476281 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.592893 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.613844 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d7fdccb-4fd0-4a6e-9241-add667b9a537-erlang-cookie-secret\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.613959 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-plugins-conf\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.613986 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-server-conf\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.614016 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-confd\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.614084 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-plugins\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.614119 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-config-data\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.614157 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-tls\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.614942 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615018 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d7fdccb-4fd0-4a6e-9241-add667b9a537-pod-info\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615044 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bt2\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-kube-api-access-49bt2\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: 
\"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-erlang-cookie\") pod \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\" (UID: \"4d7fdccb-4fd0-4a6e-9241-add667b9a537\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615249 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615782 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615912 4724 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.615928 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.627139 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.644447 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.649443 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.649500 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-kube-api-access-49bt2" (OuterVolumeSpecName: "kube-api-access-49bt2") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). 
InnerVolumeSpecName "kube-api-access-49bt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.655397 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7fdccb-4fd0-4a6e-9241-add667b9a537-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.661441 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4d7fdccb-4fd0-4a6e-9241-add667b9a537-pod-info" (OuterVolumeSpecName: "pod-info") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.693880 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-config-data" (OuterVolumeSpecName: "config-data") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.716725 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-pod-info\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.716769 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-confd\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.716797 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-server-conf\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.716831 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-erlang-cookie-secret\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.716911 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-plugins-conf\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.716982 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dd2j\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-kube-api-access-4dd2j\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 
11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-erlang-cookie\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717112 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-plugins\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717146 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717208 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-tls\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717297 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-config-data\") pod \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\" (UID: \"ad24283d-3357-4230-a2b2-3d5ed0fefa7f\") " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717916 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717940 4724 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d7fdccb-4fd0-4a6e-9241-add667b9a537-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717953 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717965 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.717988 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.718002 4724 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d7fdccb-4fd0-4a6e-9241-add667b9a537-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.718013 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bt2\" (UniqueName: 
\"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-kube-api-access-49bt2\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.718903 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.721574 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.723459 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.723740 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.742035 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-pod-info" (OuterVolumeSpecName: "pod-info") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.743796 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.748374 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.762413 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-kube-api-access-4dd2j" (OuterVolumeSpecName: "kube-api-access-4dd2j") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "kube-api-access-4dd2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.769587 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.810124 4724 generic.go:334] "Generic (PLEG): container finished" podID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerID="080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9" exitCode=0 Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.810250 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.810292 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad24283d-3357-4230-a2b2-3d5ed0fefa7f","Type":"ContainerDied","Data":"080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9"} Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.810573 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad24283d-3357-4230-a2b2-3d5ed0fefa7f","Type":"ContainerDied","Data":"cc7734e6d220507580f813de6d45266da4278dd3a73d937cd7ca08f0d4cad186"} Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.810594 4724 scope.go:117] "RemoveContainer" containerID="080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.817023 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-config-data" (OuterVolumeSpecName: "config-data") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.818274 4724 generic.go:334] "Generic (PLEG): container finished" podID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerID="e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8" exitCode=0 Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.818313 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d7fdccb-4fd0-4a6e-9241-add667b9a537","Type":"ContainerDied","Data":"e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8"} Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.818338 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d7fdccb-4fd0-4a6e-9241-add667b9a537","Type":"ContainerDied","Data":"a84a542ea8195b6ea4bec9a645a70add310134a2247d1b2753568f2b55f10e11"} Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.818397 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820009 4724 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820022 4724 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820031 4724 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820040 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dd2j\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-kube-api-access-4dd2j\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820048 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820056 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820068 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820090 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820098 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.820106 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.834203 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-server-conf" (OuterVolumeSpecName: "server-conf") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.889023 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.902086 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4d7fdccb-4fd0-4a6e-9241-add667b9a537" (UID: "4d7fdccb-4fd0-4a6e-9241-add667b9a537"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.915228 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-server-conf" (OuterVolumeSpecName: "server-conf") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.923956 4724 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.923989 4724 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d7fdccb-4fd0-4a6e-9241-add667b9a537-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.924000 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d7fdccb-4fd0-4a6e-9241-add667b9a537-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.924011 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:19 crc kubenswrapper[4724]: I0226 11:36:19.945477 4724 scope.go:117] "RemoveContainer" containerID="2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.026171 4724 scope.go:117] "RemoveContainer" containerID="080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.029363 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9\": container with ID starting with 080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9 not found: ID does not exist" containerID="080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.029418 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9"} err="failed to get container status \"080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9\": rpc error: code = NotFound desc = could not find container \"080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9\": container with ID starting with 
080f6ade20d9e595da88afe20d4f9c55286bb1dc41f06f7dd295182e1fd362a9 not found: ID does not exist" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.029448 4724 scope.go:117] "RemoveContainer" containerID="2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.040891 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21\": container with ID starting with 2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21 not found: ID does not exist" containerID="2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.040928 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21"} err="failed to get container status \"2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21\": rpc error: code = NotFound desc = could not find container \"2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21\": container with ID starting with 2ae61775b75b66aef3e5990aa38f0d8b0eb327cc86db14bc172e02fa575dee21 not found: ID does not exist" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.040954 4724 scope.go:117] "RemoveContainer" containerID="e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.046155 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ad24283d-3357-4230-a2b2-3d5ed0fefa7f" (UID: "ad24283d-3357-4230-a2b2-3d5ed0fefa7f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.129947 4724 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad24283d-3357-4230-a2b2-3d5ed0fefa7f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.162950 4724 scope.go:117] "RemoveContainer" containerID="f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.208339 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.244074 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.268406 4724 scope.go:117] "RemoveContainer" containerID="e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.276439 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8\": container with ID starting with e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8 not found: ID does not exist" containerID="e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.276490 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8"} err="failed to get container status \"e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8\": rpc error: code = NotFound desc = could not find container \"e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8\": container with ID starting with e3aa4be8f3a3d0b0dae151ea4bd94a34026eef07c9435b74466154ffb9d6add8 not found: ID does not exist" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.276521 4724 scope.go:117] "RemoveContainer" containerID="f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.280082 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2\": container with ID starting with f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2 not found: ID does not exist" containerID="f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.280136 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2"} err="failed to get container status \"f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2\": rpc error: code = NotFound desc = could not find container \"f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2\": container with ID starting with f02d3f46994abb7bc5f7a8cc31c24d6db26868c82a95d430fdc99e0104c638a2 not found: ID does not exist" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.284205 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.313588 4724 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.341265 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.349965 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="extract-utilities" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.350239 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="extract-utilities" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378297 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="extract-content" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378352 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="extract-content" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378371 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="rabbitmq" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378378 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="rabbitmq" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378399 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="rabbitmq" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378405 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="rabbitmq" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378457 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="setup-container" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378463 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="setup-container" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378477 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed5fa8e-48cb-497b-b871-3dd17b4a77e2" containerName="oc" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378487 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed5fa8e-48cb-497b-b871-3dd17b4a77e2" containerName="oc" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378511 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="setup-container" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378516 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="setup-container" Feb 26 11:36:20 crc kubenswrapper[4724]: E0226 11:36:20.378532 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="registry-server" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.378538 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="registry-server" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.382276 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" containerName="rabbitmq" Feb 26 11:36:20 crc 
kubenswrapper[4724]: I0226 11:36:20.382333 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed5fa8e-48cb-497b-b871-3dd17b4a77e2" containerName="oc" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.382379 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" containerName="rabbitmq" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.382394 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7df9bd-e8d9-4cf2-8af9-b1a6bf2cb5ac" containerName="registry-server" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.389754 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.394974 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.403892 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cg4xv" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.405781 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.405968 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.408677 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.408824 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.410214 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.418150 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.418633 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.418798 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.424350 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cfjpw" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.424621 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.428317 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.428555 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.428858 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.436581 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.458447 4724 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.462498 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550315 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6br\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-kube-api-access-ns6br\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550381 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550430 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550494 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550537 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bad75855-a326-41f0-8b17-c83e5be398b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550577 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bad75855-a326-41f0-8b17-c83e5be398b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550604 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550659 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550708 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550729 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550780 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljf5\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-kube-api-access-pljf5\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550817 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550847 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550878 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550902 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550929 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550956 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.550985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.551011 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.653748 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.653844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.653889 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.653941 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns6br\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-kube-api-access-ns6br\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654001 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654077 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654167 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654240 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bad75855-a326-41f0-8b17-c83e5be398b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654302 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bad75855-a326-41f0-8b17-c83e5be398b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654335 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654368 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654416 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654465 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654548 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654581 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654626 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pljf5\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-kube-api-access-pljf5\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654684 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654723 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654760 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654795 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.654838 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.655620 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.732812 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.733410 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.733990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.734329 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.739697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.748563 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.749448 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.749646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.752297 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.754049 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.756623 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.758705 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bad75855-a326-41f0-8b17-c83e5be398b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.779583 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.780112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bad75855-a326-41f0-8b17-c83e5be398b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.780173 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.780823 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.782044 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.801791 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.813755 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bad75855-a326-41f0-8b17-c83e5be398b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.815646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns6br\" (UniqueName: \"kubernetes.io/projected/bad75855-a326-41f0-8b17-c83e5be398b9-kube-api-access-ns6br\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:20 crc 
kubenswrapper[4724]: I0226 11:36:20.840164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pljf5\" (UniqueName: \"kubernetes.io/projected/da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df-kube-api-access-pljf5\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.841196 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:20 crc kubenswrapper[4724]: I0226 11:36:20.939844 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bad75855-a326-41f0-8b17-c83e5be398b9\") " pod="openstack/rabbitmq-server-0" Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.028614 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.047128 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:21 crc kubenswrapper[4724]: W0226 11:36:21.698248 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad75855_a326_41f0_8b17_c83e5be398b9.slice/crio-7ab3e43bac745024227c56aa9427b565416af980a07d546defb211a405a19655 WatchSource:0}: Error finding container 7ab3e43bac745024227c56aa9427b565416af980a07d546defb211a405a19655: Status 404 returned error can't find the container with id 7ab3e43bac745024227c56aa9427b565416af980a07d546defb211a405a19655 Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.702221 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.756474 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 11:36:21 crc kubenswrapper[4724]: W0226 11:36:21.764540 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda62ca3a_60df_4af3_8b0e_9dd3e8ffd0df.slice/crio-c77d37431afb8d95c0d7051f9255e132c54eb04163f3c4621c06e0482f68caea WatchSource:0}: Error finding container c77d37431afb8d95c0d7051f9255e132c54eb04163f3c4621c06e0482f68caea: Status 404 returned error can't find the container with id c77d37431afb8d95c0d7051f9255e132c54eb04163f3c4621c06e0482f68caea Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.877339 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df","Type":"ContainerStarted","Data":"c77d37431afb8d95c0d7051f9255e132c54eb04163f3c4621c06e0482f68caea"} Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.879163 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bad75855-a326-41f0-8b17-c83e5be398b9","Type":"ContainerStarted","Data":"7ab3e43bac745024227c56aa9427b565416af980a07d546defb211a405a19655"} Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.987666 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4d7fdccb-4fd0-4a6e-9241-add667b9a537" path="/var/lib/kubelet/pods/4d7fdccb-4fd0-4a6e-9241-add667b9a537/volumes" Feb 26 11:36:21 crc kubenswrapper[4724]: I0226 11:36:21.989379 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad24283d-3357-4230-a2b2-3d5ed0fefa7f" path="/var/lib/kubelet/pods/ad24283d-3357-4230-a2b2-3d5ed0fefa7f/volumes" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.797898 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-vssgn"] Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.800527 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.803849 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.813797 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-vssgn"] Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.911236 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df","Type":"ContainerStarted","Data":"79c9046dc4d99c6683d93e38de3160040eebc6cd7260dd79648dc09004a6a105"} Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.917128 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bad75855-a326-41f0-8b17-c83e5be398b9","Type":"ContainerStarted","Data":"e3597641e274ce78afcf9496f033a089ce85d333da8c4520fd90ba464344993d"} Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.928129 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.928464 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.928627 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-config\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.928756 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.928842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-svc\") pod 
\"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.928967 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnv7g\" (UniqueName: \"kubernetes.io/projected/e59cb5c2-2880-4bdd-acbe-8c66b147225d-kube-api-access-mnv7g\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:23 crc kubenswrapper[4724]: I0226 11:36:23.929134 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031099 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnv7g\" (UniqueName: \"kubernetes.io/projected/e59cb5c2-2880-4bdd-acbe-8c66b147225d-kube-api-access-mnv7g\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031201 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031325 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031407 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031444 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-config\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031497 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-svc\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.031521 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.032369 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.032418 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.033829 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-config\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.034349 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.034734 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.034829 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-svc\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.060610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnv7g\" (UniqueName: \"kubernetes.io/projected/e59cb5c2-2880-4bdd-acbe-8c66b147225d-kube-api-access-mnv7g\") pod \"dnsmasq-dns-594cb89c79-vssgn\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.125692 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:24 crc kubenswrapper[4724]: W0226 11:36:24.739972 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59cb5c2_2880_4bdd_acbe_8c66b147225d.slice/crio-01df74851dea3525b60aa8576fcb0b88d2e6ef91d1661f515f05dd022df058f7 WatchSource:0}: Error finding container 01df74851dea3525b60aa8576fcb0b88d2e6ef91d1661f515f05dd022df058f7: Status 404 returned error can't find the container with id 01df74851dea3525b60aa8576fcb0b88d2e6ef91d1661f515f05dd022df058f7 Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.743122 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-vssgn"] Feb 26 11:36:24 crc kubenswrapper[4724]: I0226 11:36:24.926977 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" event={"ID":"e59cb5c2-2880-4bdd-acbe-8c66b147225d","Type":"ContainerStarted","Data":"01df74851dea3525b60aa8576fcb0b88d2e6ef91d1661f515f05dd022df058f7"} Feb 26 11:36:25 crc kubenswrapper[4724]: I0226 11:36:25.942663 4724 generic.go:334] "Generic (PLEG): container finished" podID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerID="37a6ce30595142473a7148d065c6b0ff04608987f4e63096db1f15d4981148f3" exitCode=0 Feb 26 11:36:25 crc kubenswrapper[4724]: I0226 11:36:25.942950 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" event={"ID":"e59cb5c2-2880-4bdd-acbe-8c66b147225d","Type":"ContainerDied","Data":"37a6ce30595142473a7148d065c6b0ff04608987f4e63096db1f15d4981148f3"} Feb 26 11:36:26 crc kubenswrapper[4724]: I0226 11:36:26.955279 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" event={"ID":"e59cb5c2-2880-4bdd-acbe-8c66b147225d","Type":"ContainerStarted","Data":"faca695925747d3158221173338e12c77f6954dbbd23161fe1eb10a943be8ec6"} Feb 26 11:36:26 crc kubenswrapper[4724]: I0226 11:36:26.955819 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:26 crc kubenswrapper[4724]: I0226 11:36:26.980524 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" podStartSLOduration=3.980503809 podStartE2EDuration="3.980503809s" podCreationTimestamp="2026-02-26 11:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:36:26.979190296 +0000 UTC m=+1853.634929431" watchObservedRunningTime="2026-02-26 11:36:26.980503809 +0000 UTC m=+1853.636242924" Feb 26 11:36:33 crc kubenswrapper[4724]: I0226 11:36:33.995944 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:36:33 crc kubenswrapper[4724]: E0226 11:36:33.996827 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.127769 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.199752 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-9jpvw"] Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.200020 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerName="dnsmasq-dns" containerID="cri-o://60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74" gracePeriod=10 Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.211193 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.435843 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64f6bf65cc-sgjfx"] Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.441362 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.455462 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-dns-svc\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.455540 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-ovsdbserver-sb\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.455640 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-config\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.455697 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-ovsdbserver-nb\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.455730 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-openstack-edpm-ipam\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.455791 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-dns-swift-storage-0\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc 
kubenswrapper[4724]: I0226 11:36:34.455825 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrb6\" (UniqueName: \"kubernetes.io/projected/10b37b6f-2173-460a-aebf-876cd4efc50a-kube-api-access-ftrb6\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.457082 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64f6bf65cc-sgjfx"] Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557016 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-openstack-edpm-ipam\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557100 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-dns-swift-storage-0\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557130 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrb6\" (UniqueName: \"kubernetes.io/projected/10b37b6f-2173-460a-aebf-876cd4efc50a-kube-api-access-ftrb6\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557162 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-dns-svc\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557237 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-ovsdbserver-sb\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557310 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-config\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.557358 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-ovsdbserver-nb\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.558201 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-openstack-edpm-ipam\") 
pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.558364 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-dns-swift-storage-0\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.561746 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-config\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.561748 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-ovsdbserver-nb\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.562858 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-ovsdbserver-sb\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.563664 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b37b6f-2173-460a-aebf-876cd4efc50a-dns-svc\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.611903 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrb6\" (UniqueName: \"kubernetes.io/projected/10b37b6f-2173-460a-aebf-876cd4efc50a-kube-api-access-ftrb6\") pod \"dnsmasq-dns-64f6bf65cc-sgjfx\" (UID: \"10b37b6f-2173-460a-aebf-876cd4efc50a\") " pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.766518 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.954889 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.965910 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-swift-storage-0\") pod \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.965969 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkxlk\" (UniqueName: \"kubernetes.io/projected/cc258ae0-3005-4720-bcde-7a7be93c5dd0-kube-api-access-pkxlk\") pod \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.966027 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-config\") pod \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.966094 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-nb\") pod \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.966190 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-sb\") pod \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " Feb 26 11:36:34 crc kubenswrapper[4724]: I0226 11:36:34.966217 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-svc\") pod \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\" (UID: \"cc258ae0-3005-4720-bcde-7a7be93c5dd0\") " Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.017743 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc258ae0-3005-4720-bcde-7a7be93c5dd0-kube-api-access-pkxlk" (OuterVolumeSpecName: "kube-api-access-pkxlk") pod "cc258ae0-3005-4720-bcde-7a7be93c5dd0" (UID: "cc258ae0-3005-4720-bcde-7a7be93c5dd0"). InnerVolumeSpecName "kube-api-access-pkxlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.053739 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerID="60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74" exitCode=0 Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.053788 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" event={"ID":"cc258ae0-3005-4720-bcde-7a7be93c5dd0","Type":"ContainerDied","Data":"60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74"} Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.053808 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.053830 4724 scope.go:117] "RemoveContainer" containerID="60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.053819 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-9jpvw" event={"ID":"cc258ae0-3005-4720-bcde-7a7be93c5dd0","Type":"ContainerDied","Data":"00afa9433dc1c6e7c86a48f627cba7a08fb6a424fd594fdd7d7835f66d155505"} Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.069474 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkxlk\" (UniqueName: \"kubernetes.io/projected/cc258ae0-3005-4720-bcde-7a7be93c5dd0-kube-api-access-pkxlk\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.079097 4724 scope.go:117] "RemoveContainer" containerID="1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.120919 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-config" (OuterVolumeSpecName: "config") pod "cc258ae0-3005-4720-bcde-7a7be93c5dd0" (UID: "cc258ae0-3005-4720-bcde-7a7be93c5dd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.124634 4724 scope.go:117] "RemoveContainer" containerID="60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74" Feb 26 11:36:35 crc kubenswrapper[4724]: E0226 11:36:35.130339 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74\": container with ID starting with 60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74 not found: ID does not exist" containerID="60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.130394 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74"} err="failed to get container status \"60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74\": rpc error: code = NotFound desc = could not find container \"60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74\": container with ID starting with 60bcb618d2445bf9f6312c22847c593458bb0d0ed2aa7e1cbf04e86767703a74 not found: ID does not exist" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.130424 4724 scope.go:117] "RemoveContainer" containerID="1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4" Feb 26 11:36:35 crc kubenswrapper[4724]: E0226 11:36:35.131003 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4\": container with ID starting with 1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4 not found: ID does not exist" containerID="1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.131036 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4"} 
err="failed to get container status \"1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4\": rpc error: code = NotFound desc = could not find container \"1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4\": container with ID starting with 1effd0b596865bb4b4f9296c953bb348d8b56a911231e548079e793d890679c4 not found: ID does not exist" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.131426 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc258ae0-3005-4720-bcde-7a7be93c5dd0" (UID: "cc258ae0-3005-4720-bcde-7a7be93c5dd0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.152780 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc258ae0-3005-4720-bcde-7a7be93c5dd0" (UID: "cc258ae0-3005-4720-bcde-7a7be93c5dd0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.163081 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc258ae0-3005-4720-bcde-7a7be93c5dd0" (UID: "cc258ae0-3005-4720-bcde-7a7be93c5dd0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.163606 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc258ae0-3005-4720-bcde-7a7be93c5dd0" (UID: "cc258ae0-3005-4720-bcde-7a7be93c5dd0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.171761 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.171798 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.171813 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.171824 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.171852 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc258ae0-3005-4720-bcde-7a7be93c5dd0-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.415051 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-9jpvw"] Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.426277 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-9jpvw"] Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.451578 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64f6bf65cc-sgjfx"] Feb 26 11:36:35 crc kubenswrapper[4724]: I0226 11:36:35.993886 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" path="/var/lib/kubelet/pods/cc258ae0-3005-4720-bcde-7a7be93c5dd0/volumes" Feb 26 11:36:36 crc kubenswrapper[4724]: I0226 11:36:36.078905 4724 generic.go:334] "Generic (PLEG): container finished" podID="10b37b6f-2173-460a-aebf-876cd4efc50a" containerID="c081e7ac697d70b1da02cb7f7054943ee42c069921787f45dc05e9827bb22617" exitCode=0 Feb 26 11:36:36 crc kubenswrapper[4724]: I0226 11:36:36.078976 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" event={"ID":"10b37b6f-2173-460a-aebf-876cd4efc50a","Type":"ContainerDied","Data":"c081e7ac697d70b1da02cb7f7054943ee42c069921787f45dc05e9827bb22617"} Feb 26 11:36:36 crc kubenswrapper[4724]: I0226 11:36:36.079458 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" event={"ID":"10b37b6f-2173-460a-aebf-876cd4efc50a","Type":"ContainerStarted","Data":"f22a9ce73ac772db1d80a6057cbbfb4aa2ff6c665767b4c29242f202a7d4d361"} Feb 26 11:36:37 crc kubenswrapper[4724]: I0226 11:36:37.103762 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" event={"ID":"10b37b6f-2173-460a-aebf-876cd4efc50a","Type":"ContainerStarted","Data":"bab8893404ea088357c7381c2e63311a6259d9325483b44ad2cfc8e6501f2bad"} Feb 26 11:36:37 crc kubenswrapper[4724]: I0226 11:36:37.104351 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:37 crc kubenswrapper[4724]: I0226 
11:36:37.155193 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" podStartSLOduration=3.15515453 podStartE2EDuration="3.15515453s" podCreationTimestamp="2026-02-26 11:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:36:37.134588507 +0000 UTC m=+1863.790327632" watchObservedRunningTime="2026-02-26 11:36:37.15515453 +0000 UTC m=+1863.810893645" Feb 26 11:36:44 crc kubenswrapper[4724]: I0226 11:36:44.768382 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64f6bf65cc-sgjfx" Feb 26 11:36:44 crc kubenswrapper[4724]: I0226 11:36:44.836077 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-vssgn"] Feb 26 11:36:44 crc kubenswrapper[4724]: I0226 11:36:44.836405 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerName="dnsmasq-dns" containerID="cri-o://faca695925747d3158221173338e12c77f6954dbbd23161fe1eb10a943be8ec6" gracePeriod=10 Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.208749 4724 generic.go:334] "Generic (PLEG): container finished" podID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerID="faca695925747d3158221173338e12c77f6954dbbd23161fe1eb10a943be8ec6" exitCode=0 Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.208807 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" event={"ID":"e59cb5c2-2880-4bdd-acbe-8c66b147225d","Type":"ContainerDied","Data":"faca695925747d3158221173338e12c77f6954dbbd23161fe1eb10a943be8ec6"} Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.451754 4724 util.go:48] "No ready sandbox for pod can be found. 
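The "Killing container with a grace period ... gracePeriod=10" record is the usual two-step stop: ask the container to exit, wait up to the grace period, then force-kill. A rough stdlib illustration of that sequence against an ordinary Unix process, not the kubelet's actual CRI code path:

    // TERM-then-KILL with a grace period (illustrative, not kubelet code).
    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM) // polite stop request
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period, like dnsmasq-dns above
        case <-time.After(grace):
            return cmd.Process.Kill() // grace expired: SIGKILL
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        _ = killWithGrace(cmd, 10*time.Second) // gracePeriod=10 from the log
    }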
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633230 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-swift-storage-0\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633301 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-nb\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633329 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnv7g\" (UniqueName: \"kubernetes.io/projected/e59cb5c2-2880-4bdd-acbe-8c66b147225d-kube-api-access-mnv7g\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633406 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-openstack-edpm-ipam\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633469 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-sb\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633531 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-svc\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.633555 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-config\") pod \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\" (UID: \"e59cb5c2-2880-4bdd-acbe-8c66b147225d\") " Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.649586 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59cb5c2-2880-4bdd-acbe-8c66b147225d-kube-api-access-mnv7g" (OuterVolumeSpecName: "kube-api-access-mnv7g") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "kube-api-access-mnv7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.719302 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.719984 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.735749 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.735777 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnv7g\" (UniqueName: \"kubernetes.io/projected/e59cb5c2-2880-4bdd-acbe-8c66b147225d-kube-api-access-mnv7g\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.735788 4724 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.739927 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-config" (OuterVolumeSpecName: "config") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.744300 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.751823 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.769313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e59cb5c2-2880-4bdd-acbe-8c66b147225d" (UID: "e59cb5c2-2880-4bdd-acbe-8c66b147225d"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.837770 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.837804 4724 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.837812 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-config\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:45 crc kubenswrapper[4724]: I0226 11:36:45.837821 4724 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e59cb5c2-2880-4bdd-acbe-8c66b147225d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.220111 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" event={"ID":"e59cb5c2-2880-4bdd-acbe-8c66b147225d","Type":"ContainerDied","Data":"01df74851dea3525b60aa8576fcb0b88d2e6ef91d1661f515f05dd022df058f7"} Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.220172 4724 scope.go:117] "RemoveContainer" containerID="faca695925747d3158221173338e12c77f6954dbbd23161fe1eb10a943be8ec6" Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.220214 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-vssgn" Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.256473 4724 scope.go:117] "RemoveContainer" containerID="37a6ce30595142473a7148d065c6b0ff04608987f4e63096db1f15d4981148f3" Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.257516 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-vssgn"] Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.273816 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-vssgn"] Feb 26 11:36:46 crc kubenswrapper[4724]: I0226 11:36:46.975227 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:36:46 crc kubenswrapper[4724]: E0226 11:36:46.975529 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:36:47 crc kubenswrapper[4724]: I0226 11:36:47.987636 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" path="/var/lib/kubelet/pods/e59cb5c2-2880-4bdd-acbe-8c66b147225d/volumes" Feb 26 11:36:56 crc kubenswrapper[4724]: I0226 11:36:56.311656 4724 generic.go:334] "Generic (PLEG): container finished" podID="bad75855-a326-41f0-8b17-c83e5be398b9" containerID="e3597641e274ce78afcf9496f033a089ce85d333da8c4520fd90ba464344993d" exitCode=0 Feb 26 11:36:56 crc kubenswrapper[4724]: I0226 11:36:56.311734 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bad75855-a326-41f0-8b17-c83e5be398b9","Type":"ContainerDied","Data":"e3597641e274ce78afcf9496f033a089ce85d333da8c4520fd90ba464344993d"} Feb 26 11:36:56 crc kubenswrapper[4724]: I0226 11:36:56.315502 4724 generic.go:334] "Generic (PLEG): container finished" podID="da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df" containerID="79c9046dc4d99c6683d93e38de3160040eebc6cd7260dd79648dc09004a6a105" exitCode=0 Feb 26 11:36:56 crc kubenswrapper[4724]: I0226 11:36:56.315546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df","Type":"ContainerDied","Data":"79c9046dc4d99c6683d93e38de3160040eebc6cd7260dd79648dc09004a6a105"} Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.326199 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bad75855-a326-41f0-8b17-c83e5be398b9","Type":"ContainerStarted","Data":"d8b2c7f4b88ce0ab589e5a577aaf546f29020d1cac65d30784fcb1c8a6f725ba"} Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.327489 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.328268 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df","Type":"ContainerStarted","Data":"2780efbb47414f85e13d39498155ed5be873c1d42bc956e6882a4499c6f139de"} Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.328469 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.362163 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.362140925 podStartE2EDuration="37.362140925s" podCreationTimestamp="2026-02-26 11:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:36:57.352148561 +0000 UTC m=+1884.007887676" watchObservedRunningTime="2026-02-26 11:36:57.362140925 +0000 UTC m=+1884.017880040" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.382572 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.382550513 podStartE2EDuration="37.382550513s" podCreationTimestamp="2026-02-26 11:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:36:57.380192723 +0000 UTC m=+1884.035931858" watchObservedRunningTime="2026-02-26 11:36:57.382550513 +0000 UTC m=+1884.038289618" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.508389 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"] Feb 26 11:36:57 crc kubenswrapper[4724]: E0226 11:36:57.508860 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerName="init" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.508880 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerName="init" Feb 26 11:36:57 crc kubenswrapper[4724]: E0226 11:36:57.508906 4724 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerName="dnsmasq-dns" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.508915 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerName="dnsmasq-dns" Feb 26 11:36:57 crc kubenswrapper[4724]: E0226 11:36:57.508942 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerName="init" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.508949 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerName="init" Feb 26 11:36:57 crc kubenswrapper[4724]: E0226 11:36:57.508978 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerName="dnsmasq-dns" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.508987 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerName="dnsmasq-dns" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.509200 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59cb5c2-2880-4bdd-acbe-8c66b147225d" containerName="dnsmasq-dns" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.509233 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc258ae0-3005-4720-bcde-7a7be93c5dd0" containerName="dnsmasq-dns" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.509989 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.512607 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.512619 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.512796 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.519932 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"] Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.520277 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.564525 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f9jj\" (UniqueName: \"kubernetes.io/projected/49850149-79d3-4700-801a-c2630caba9c9-kube-api-access-8f9jj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.564846 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.564949 4724 
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.565037 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.667127 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f9jj\" (UniqueName: \"kubernetes.io/projected/49850149-79d3-4700-801a-c2630caba9c9-kube-api-access-8f9jj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.667196 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.667231 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.667263 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.672408 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.673017 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.691793 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.694330 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f9jj\" (UniqueName: \"kubernetes.io/projected/49850149-79d3-4700-801a-c2630caba9c9-kube-api-access-8f9jj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.847814 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
Feb 26 11:36:57 crc kubenswrapper[4724]: I0226 11:36:57.975917 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"
Feb 26 11:36:57 crc kubenswrapper[4724]: E0226 11:36:57.976974 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 11:36:58 crc kubenswrapper[4724]: I0226 11:36:58.492384 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"]
Feb 26 11:36:58 crc kubenswrapper[4724]: W0226 11:36:58.496898 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49850149_79d3_4700_801a_c2630caba9c9.slice/crio-98f98b9982ce8584680a76a81e0557dfb6bdf2c9cdb64a54157a3e6bb903d477 WatchSource:0}: Error finding container 98f98b9982ce8584680a76a81e0557dfb6bdf2c9cdb64a54157a3e6bb903d477: Status 404 returned error can't find the container with id 98f98b9982ce8584680a76a81e0557dfb6bdf2c9cdb64a54157a3e6bb903d477
Feb 26 11:36:59 crc kubenswrapper[4724]: I0226 11:36:59.349755 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" event={"ID":"49850149-79d3-4700-801a-c2630caba9c9","Type":"ContainerStarted","Data":"98f98b9982ce8584680a76a81e0557dfb6bdf2c9cdb64a54157a3e6bb903d477"}
Feb 26 11:37:11 crc kubenswrapper[4724]: I0226 11:37:11.031484 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="bad75855-a326-41f0-8b17-c83e5be398b9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.229:5671: connect: connection refused"
Feb 26 11:37:11 crc kubenswrapper[4724]: I0226 11:37:11.049639 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.230:5671: connect: connection refused"
Feb 26 11:37:11 crc kubenswrapper[4724]: I0226 11:37:11.518914 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" event={"ID":"49850149-79d3-4700-801a-c2630caba9c9","Type":"ContainerStarted","Data":"ee74661a297a0387571e97c35ec2ac1b38efaec212589bd380e4d3190ff90942"}
Feb 26 11:37:11 crc kubenswrapper[4724]: I0226 11:37:11.539919 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" podStartSLOduration=2.668579728 podStartE2EDuration="14.539898507s" podCreationTimestamp="2026-02-26 11:36:57 +0000 UTC" firstStartedPulling="2026-02-26 11:36:58.50034031 +0000 UTC m=+1885.156079425" lastFinishedPulling="2026-02-26 11:37:10.371659099 +0000 UTC m=+1897.027398204" observedRunningTime="2026-02-26 11:37:11.535205458 +0000 UTC m=+1898.190944593" watchObservedRunningTime="2026-02-26 11:37:11.539898507 +0000 UTC m=+1898.195637622"
Feb 26 11:37:11 crc kubenswrapper[4724]: I0226 11:37:11.975325 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"
Feb 26 11:37:11 crc kubenswrapper[4724]: E0226 11:37:11.975590 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 11:37:21 crc kubenswrapper[4724]: I0226 11:37:21.030327 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 26 11:37:21 crc kubenswrapper[4724]: I0226 11:37:21.049324 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 11:37:22 crc kubenswrapper[4724]: I0226 11:37:22.623248 4724 generic.go:334] "Generic (PLEG): container finished" podID="49850149-79d3-4700-801a-c2630caba9c9" containerID="ee74661a297a0387571e97c35ec2ac1b38efaec212589bd380e4d3190ff90942" exitCode=0
Feb 26 11:37:22 crc kubenswrapper[4724]: I0226 11:37:22.623290 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" event={"ID":"49850149-79d3-4700-801a-c2630caba9c9","Type":"ContainerDied","Data":"ee74661a297a0387571e97c35ec2ac1b38efaec212589bd380e4d3190ff90942"}
Feb 26 11:37:23 crc kubenswrapper[4724]: I0226 11:37:23.982810 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"
Feb 26 11:37:23 crc kubenswrapper[4724]: E0226 11:37:23.983355 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.217735 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2"
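The startup-latency record for repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2 reports two durations: podStartE2EDuration (observed running time minus creation, about 14.54s) and podStartSLOduration, which subtracts the image-pull window (about 11.87s between firstStartedPulling and lastFinishedPulling), leaving about 2.67s. Reproducing the arithmetic from the logged timestamps; the ~10ns residue against the logged SLO value comes from the tracker using monotonic readings rather than re-parsed wall-clock times:

    // Startup-latency arithmetic from the logged timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-26 11:36:57 +0000 UTC")
        pullStart := mustParse("2026-02-26 11:36:58.50034031 +0000 UTC")
        pullEnd := mustParse("2026-02-26 11:37:10.371659099 +0000 UTC")
        observed := mustParse("2026-02-26 11:37:11.539898507 +0000 UTC")

        e2e := observed.Sub(created)     // podStartE2EDuration ≈ 14.54s
        pull := pullEnd.Sub(pullStart)   // image-pull window ≈ 11.87s
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull ≈ 2.6686s ≈ podStartSLOduration
    }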
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.332915 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f9jj\" (UniqueName: \"kubernetes.io/projected/49850149-79d3-4700-801a-c2630caba9c9-kube-api-access-8f9jj\") pod \"49850149-79d3-4700-801a-c2630caba9c9\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.332971 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-inventory\") pod \"49850149-79d3-4700-801a-c2630caba9c9\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.333031 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-repo-setup-combined-ca-bundle\") pod \"49850149-79d3-4700-801a-c2630caba9c9\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.333167 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-ssh-key-openstack-edpm-ipam\") pod \"49850149-79d3-4700-801a-c2630caba9c9\" (UID: \"49850149-79d3-4700-801a-c2630caba9c9\") " Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.343207 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "49850149-79d3-4700-801a-c2630caba9c9" (UID: "49850149-79d3-4700-801a-c2630caba9c9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.346845 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49850149-79d3-4700-801a-c2630caba9c9-kube-api-access-8f9jj" (OuterVolumeSpecName: "kube-api-access-8f9jj") pod "49850149-79d3-4700-801a-c2630caba9c9" (UID: "49850149-79d3-4700-801a-c2630caba9c9"). InnerVolumeSpecName "kube-api-access-8f9jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.363080 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-inventory" (OuterVolumeSpecName: "inventory") pod "49850149-79d3-4700-801a-c2630caba9c9" (UID: "49850149-79d3-4700-801a-c2630caba9c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.367307 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49850149-79d3-4700-801a-c2630caba9c9" (UID: "49850149-79d3-4700-801a-c2630caba9c9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.435267 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f9jj\" (UniqueName: \"kubernetes.io/projected/49850149-79d3-4700-801a-c2630caba9c9-kube-api-access-8f9jj\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.435298 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.435308 4724 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.435318 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49850149-79d3-4700-801a-c2630caba9c9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.642845 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" event={"ID":"49850149-79d3-4700-801a-c2630caba9c9","Type":"ContainerDied","Data":"98f98b9982ce8584680a76a81e0557dfb6bdf2c9cdb64a54157a3e6bb903d477"} Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.642878 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f98b9982ce8584680a76a81e0557dfb6bdf2c9cdb64a54157a3e6bb903d477" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.642930 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.750740 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p"] Feb 26 11:37:24 crc kubenswrapper[4724]: E0226 11:37:24.760366 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49850149-79d3-4700-801a-c2630caba9c9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.760404 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="49850149-79d3-4700-801a-c2630caba9c9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.760794 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="49850149-79d3-4700-801a-c2630caba9c9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.761667 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.764276 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.764560 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.764768 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.764925 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.774467 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p"] Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.846419 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.846716 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czh56\" (UniqueName: \"kubernetes.io/projected/de96567c-d135-4e9a-b847-ce90658d94be-kube-api-access-czh56\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.846759 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.948222 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czh56\" (UniqueName: \"kubernetes.io/projected/de96567c-d135-4e9a-b847-ce90658d94be-kube-api-access-czh56\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.948278 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.948370 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.952612 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.954692 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:24 crc kubenswrapper[4724]: I0226 11:37:24.969875 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czh56\" (UniqueName: \"kubernetes.io/projected/de96567c-d135-4e9a-b847-ce90658d94be-kube-api-access-czh56\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c86p\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:25 crc kubenswrapper[4724]: I0226 11:37:25.086009 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:25 crc kubenswrapper[4724]: I0226 11:37:25.596885 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p"] Feb 26 11:37:25 crc kubenswrapper[4724]: I0226 11:37:25.666368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" event={"ID":"de96567c-d135-4e9a-b847-ce90658d94be","Type":"ContainerStarted","Data":"10593e9137594773ef14bf072b55a2291232e0f30649be096b41e5b4f3f87dc1"} Feb 26 11:37:26 crc kubenswrapper[4724]: I0226 11:37:26.676081 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" event={"ID":"de96567c-d135-4e9a-b847-ce90658d94be","Type":"ContainerStarted","Data":"06b559a9db75b4ca90a6661283d625bc02b5071a99467370f8deecccd229b85c"} Feb 26 11:37:26 crc kubenswrapper[4724]: I0226 11:37:26.695790 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" podStartSLOduration=2.304967178 podStartE2EDuration="2.695773763s" podCreationTimestamp="2026-02-26 11:37:24 +0000 UTC" firstStartedPulling="2026-02-26 11:37:25.59670043 +0000 UTC m=+1912.252439545" lastFinishedPulling="2026-02-26 11:37:25.987507015 +0000 UTC m=+1912.643246130" observedRunningTime="2026-02-26 11:37:26.692854589 +0000 UTC m=+1913.348593704" watchObservedRunningTime="2026-02-26 11:37:26.695773763 +0000 UTC m=+1913.351512878" Feb 26 11:37:28 crc kubenswrapper[4724]: I0226 11:37:28.695528 4724 generic.go:334] "Generic (PLEG): container finished" podID="de96567c-d135-4e9a-b847-ce90658d94be" containerID="06b559a9db75b4ca90a6661283d625bc02b5071a99467370f8deecccd229b85c" exitCode=0 Feb 26 11:37:28 crc kubenswrapper[4724]: I0226 11:37:28.695618 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" event={"ID":"de96567c-d135-4e9a-b847-ce90658d94be","Type":"ContainerDied","Data":"06b559a9db75b4ca90a6661283d625bc02b5071a99467370f8deecccd229b85c"} Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.119687 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.154648 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czh56\" (UniqueName: \"kubernetes.io/projected/de96567c-d135-4e9a-b847-ce90658d94be-kube-api-access-czh56\") pod \"de96567c-d135-4e9a-b847-ce90658d94be\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.154793 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-inventory\") pod \"de96567c-d135-4e9a-b847-ce90658d94be\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.154831 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-ssh-key-openstack-edpm-ipam\") pod \"de96567c-d135-4e9a-b847-ce90658d94be\" (UID: \"de96567c-d135-4e9a-b847-ce90658d94be\") " Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.168514 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de96567c-d135-4e9a-b847-ce90658d94be-kube-api-access-czh56" (OuterVolumeSpecName: "kube-api-access-czh56") pod "de96567c-d135-4e9a-b847-ce90658d94be" (UID: "de96567c-d135-4e9a-b847-ce90658d94be"). InnerVolumeSpecName "kube-api-access-czh56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.189230 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-inventory" (OuterVolumeSpecName: "inventory") pod "de96567c-d135-4e9a-b847-ce90658d94be" (UID: "de96567c-d135-4e9a-b847-ce90658d94be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.202420 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de96567c-d135-4e9a-b847-ce90658d94be" (UID: "de96567c-d135-4e9a-b847-ce90658d94be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.256902 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czh56\" (UniqueName: \"kubernetes.io/projected/de96567c-d135-4e9a-b847-ce90658d94be-kube-api-access-czh56\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.256940 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.256950 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de96567c-d135-4e9a-b847-ce90658d94be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.713391 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" event={"ID":"de96567c-d135-4e9a-b847-ce90658d94be","Type":"ContainerDied","Data":"10593e9137594773ef14bf072b55a2291232e0f30649be096b41e5b4f3f87dc1"} Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.713664 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10593e9137594773ef14bf072b55a2291232e0f30649be096b41e5b4f3f87dc1" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.713546 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c86p" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.786941 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7"] Feb 26 11:37:30 crc kubenswrapper[4724]: E0226 11:37:30.787978 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de96567c-d135-4e9a-b847-ce90658d94be" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.788000 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="de96567c-d135-4e9a-b847-ce90658d94be" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.788198 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="de96567c-d135-4e9a-b847-ce90658d94be" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.788834 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.793257 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.793478 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.793728 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.794732 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.806614 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7"] Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.870787 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.870835 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.870888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.870955 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdgr\" (UniqueName: \"kubernetes.io/projected/fb1451db-04cb-41fc-b46a-3a64ea6e8528-kube-api-access-bmdgr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.972705 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.973069 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.973230 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.973343 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdgr\" (UniqueName: \"kubernetes.io/projected/fb1451db-04cb-41fc-b46a-3a64ea6e8528-kube-api-access-bmdgr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.980612 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.981872 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.982715 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:30 crc kubenswrapper[4724]: I0226 11:37:30.991695 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdgr\" (UniqueName: \"kubernetes.io/projected/fb1451db-04cb-41fc-b46a-3a64ea6e8528-kube-api-access-bmdgr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:31 crc kubenswrapper[4724]: I0226 11:37:31.127754 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:37:31 crc kubenswrapper[4724]: I0226 11:37:31.683640 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7"] Feb 26 11:37:31 crc kubenswrapper[4724]: I0226 11:37:31.724602 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" event={"ID":"fb1451db-04cb-41fc-b46a-3a64ea6e8528","Type":"ContainerStarted","Data":"ab395d4f730050585022d256cf355608d2b342c1502c9a8c0453e3fefe07c342"} Feb 26 11:37:32 crc kubenswrapper[4724]: I0226 11:37:32.741668 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" event={"ID":"fb1451db-04cb-41fc-b46a-3a64ea6e8528","Type":"ContainerStarted","Data":"3bd9f0c660b01ab36ebef299ce821e9b76db2d56c8de6721f4df62bc6acd1c6e"} Feb 26 11:37:32 crc kubenswrapper[4724]: I0226 11:37:32.768739 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" podStartSLOduration=2.298001369 podStartE2EDuration="2.768720161s" podCreationTimestamp="2026-02-26 11:37:30 +0000 UTC" firstStartedPulling="2026-02-26 11:37:31.686882395 +0000 UTC m=+1918.342621510" lastFinishedPulling="2026-02-26 11:37:32.157601187 +0000 UTC m=+1918.813340302" observedRunningTime="2026-02-26 11:37:32.763587961 +0000 UTC m=+1919.419327076" watchObservedRunningTime="2026-02-26 11:37:32.768720161 +0000 UTC m=+1919.424459266" Feb 26 11:37:38 crc kubenswrapper[4724]: I0226 11:37:38.975449 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:37:38 crc kubenswrapper[4724]: E0226 11:37:38.976820 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:37:51 crc kubenswrapper[4724]: I0226 11:37:51.975754 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:37:51 crc kubenswrapper[4724]: E0226 11:37:51.976547 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:37:56 crc kubenswrapper[4724]: I0226 11:37:56.443643 4724 scope.go:117] "RemoveContainer" containerID="31b208c45ecbb18bfb2d7e7dafbb3836f48970b5a23499aa599d769b90ed63c0" Feb 26 11:37:56 crc kubenswrapper[4724]: I0226 11:37:56.485431 4724 scope.go:117] "RemoveContainer" containerID="1cd908824885fea8e8151befad8384cea2476e615a1b043b266cf513ee595cf5" Feb 26 11:37:56 crc kubenswrapper[4724]: I0226 11:37:56.574468 4724 scope.go:117] "RemoveContainer" containerID="f628d07d09171904eb17b3a4883a21e8aaf92b69ee4ef08c151d9925936bc2da" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.150613 4724 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535098-6cjhk"] Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.152531 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.156538 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.156758 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.156886 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.162659 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-6cjhk"] Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.302737 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5z4\" (UniqueName: \"kubernetes.io/projected/345ac49f-b371-407c-9e58-781821e13a1b-kube-api-access-rk5z4\") pod \"auto-csr-approver-29535098-6cjhk\" (UID: \"345ac49f-b371-407c-9e58-781821e13a1b\") " pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.405143 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk5z4\" (UniqueName: \"kubernetes.io/projected/345ac49f-b371-407c-9e58-781821e13a1b-kube-api-access-rk5z4\") pod \"auto-csr-approver-29535098-6cjhk\" (UID: \"345ac49f-b371-407c-9e58-781821e13a1b\") " pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.430322 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk5z4\" (UniqueName: \"kubernetes.io/projected/345ac49f-b371-407c-9e58-781821e13a1b-kube-api-access-rk5z4\") pod \"auto-csr-approver-29535098-6cjhk\" (UID: \"345ac49f-b371-407c-9e58-781821e13a1b\") " pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.480593 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:00 crc kubenswrapper[4724]: I0226 11:38:00.984923 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-6cjhk"] Feb 26 11:38:01 crc kubenswrapper[4724]: I0226 11:38:01.207580 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" event={"ID":"345ac49f-b371-407c-9e58-781821e13a1b","Type":"ContainerStarted","Data":"4243a131ee4d6ddf2944b41c4236a4f6283f25749493b82f2e0af047cac74da3"} Feb 26 11:38:03 crc kubenswrapper[4724]: I0226 11:38:03.226938 4724 generic.go:334] "Generic (PLEG): container finished" podID="345ac49f-b371-407c-9e58-781821e13a1b" containerID="75ea0d78279daa310ecf39795bd2e46093f946f1ef572ee41d4941eed8bed574" exitCode=0 Feb 26 11:38:03 crc kubenswrapper[4724]: I0226 11:38:03.227249 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" event={"ID":"345ac49f-b371-407c-9e58-781821e13a1b","Type":"ContainerDied","Data":"75ea0d78279daa310ecf39795bd2e46093f946f1ef572ee41d4941eed8bed574"} Feb 26 11:38:04 crc kubenswrapper[4724]: I0226 11:38:04.554119 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:04 crc kubenswrapper[4724]: I0226 11:38:04.587039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk5z4\" (UniqueName: \"kubernetes.io/projected/345ac49f-b371-407c-9e58-781821e13a1b-kube-api-access-rk5z4\") pod \"345ac49f-b371-407c-9e58-781821e13a1b\" (UID: \"345ac49f-b371-407c-9e58-781821e13a1b\") " Feb 26 11:38:04 crc kubenswrapper[4724]: I0226 11:38:04.592399 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345ac49f-b371-407c-9e58-781821e13a1b-kube-api-access-rk5z4" (OuterVolumeSpecName: "kube-api-access-rk5z4") pod "345ac49f-b371-407c-9e58-781821e13a1b" (UID: "345ac49f-b371-407c-9e58-781821e13a1b"). InnerVolumeSpecName "kube-api-access-rk5z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:38:04 crc kubenswrapper[4724]: I0226 11:38:04.689252 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk5z4\" (UniqueName: \"kubernetes.io/projected/345ac49f-b371-407c-9e58-781821e13a1b-kube-api-access-rk5z4\") on node \"crc\" DevicePath \"\"" Feb 26 11:38:04 crc kubenswrapper[4724]: I0226 11:38:04.975236 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:38:04 crc kubenswrapper[4724]: E0226 11:38:04.975491 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:38:05 crc kubenswrapper[4724]: I0226 11:38:05.258429 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" event={"ID":"345ac49f-b371-407c-9e58-781821e13a1b","Type":"ContainerDied","Data":"4243a131ee4d6ddf2944b41c4236a4f6283f25749493b82f2e0af047cac74da3"} Feb 26 11:38:05 crc kubenswrapper[4724]: I0226 11:38:05.258477 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4243a131ee4d6ddf2944b41c4236a4f6283f25749493b82f2e0af047cac74da3" Feb 26 11:38:05 crc kubenswrapper[4724]: I0226 11:38:05.258530 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535098-6cjhk" Feb 26 11:38:05 crc kubenswrapper[4724]: I0226 11:38:05.632395 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-bd82s"] Feb 26 11:38:05 crc kubenswrapper[4724]: I0226 11:38:05.642174 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535092-bd82s"] Feb 26 11:38:05 crc kubenswrapper[4724]: I0226 11:38:05.989291 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe38a495-f33f-49c3-a514-75542323fe2e" path="/var/lib/kubelet/pods/fe38a495-f33f-49c3-a514-75542323fe2e/volumes" Feb 26 11:38:17 crc kubenswrapper[4724]: I0226 11:38:17.976268 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:38:17 crc kubenswrapper[4724]: E0226 11:38:17.978356 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:38:31 crc kubenswrapper[4724]: I0226 11:38:31.975665 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:38:31 crc kubenswrapper[4724]: E0226 11:38:31.976533 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:38:42 crc kubenswrapper[4724]: I0226 11:38:42.975492 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:38:42 crc kubenswrapper[4724]: E0226 11:38:42.976535 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:38:56 crc kubenswrapper[4724]: I0226 11:38:56.653517 4724 scope.go:117] "RemoveContainer" containerID="fcf8d1c0459d5f1cefc067ac2134c960c7cde95abed13391893b2cf7d4df5b2a" Feb 26 11:38:56 crc kubenswrapper[4724]: I0226 11:38:56.700649 4724 scope.go:117] "RemoveContainer" containerID="209c179e834cfbc11ae4615a46134e3f77a34b5848a1e081086216ae023c3126" Feb 26 11:38:57 crc kubenswrapper[4724]: I0226 11:38:57.976310 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:38:57 crc kubenswrapper[4724]: E0226 11:38:57.976562 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.179130 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7lxcr"] Feb 26 11:38:58 crc kubenswrapper[4724]: E0226 11:38:58.179858 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345ac49f-b371-407c-9e58-781821e13a1b" containerName="oc" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.179879 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="345ac49f-b371-407c-9e58-781821e13a1b" containerName="oc" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.180070 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="345ac49f-b371-407c-9e58-781821e13a1b" containerName="oc" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.181827 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.191829 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7lxcr"] Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.283625 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-utilities\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.283724 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-catalog-content\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.283769 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w64q\" (UniqueName: \"kubernetes.io/projected/f4eba936-488c-4623-b65b-972f18d8dbb9-kube-api-access-2w64q\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.386148 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-utilities\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.386301 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-catalog-content\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.386363 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w64q\" (UniqueName: \"kubernetes.io/projected/f4eba936-488c-4623-b65b-972f18d8dbb9-kube-api-access-2w64q\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.386839 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-utilities\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.386895 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-catalog-content\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.408012 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2w64q\" (UniqueName: \"kubernetes.io/projected/f4eba936-488c-4623-b65b-972f18d8dbb9-kube-api-access-2w64q\") pod \"community-operators-7lxcr\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.518088 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:38:58 crc kubenswrapper[4724]: I0226 11:38:58.991205 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7lxcr"] Feb 26 11:38:59 crc kubenswrapper[4724]: I0226 11:38:59.762256 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerID="042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519" exitCode=0 Feb 26 11:38:59 crc kubenswrapper[4724]: I0226 11:38:59.762352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerDied","Data":"042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519"} Feb 26 11:38:59 crc kubenswrapper[4724]: I0226 11:38:59.762447 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerStarted","Data":"52e47311247c45ee79ac3df21beb08da32ae6b50a18c47d45ec34dbbc38d3d31"} Feb 26 11:38:59 crc kubenswrapper[4724]: I0226 11:38:59.963293 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zk2l8"] Feb 26 11:38:59 crc kubenswrapper[4724]: I0226 11:38:59.966768 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.025803 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk2l8"] Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.027869 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-utilities\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.028156 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-catalog-content\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.029682 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74s7d\" (UniqueName: \"kubernetes.io/projected/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-kube-api-access-74s7d\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.132365 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-catalog-content\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.132467 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74s7d\" (UniqueName: \"kubernetes.io/projected/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-kube-api-access-74s7d\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.132575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-utilities\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.132872 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-catalog-content\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.133057 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-utilities\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.159353 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-74s7d\" (UniqueName: \"kubernetes.io/projected/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-kube-api-access-74s7d\") pod \"redhat-marketplace-zk2l8\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.298844 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.568881 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.571089 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.598774 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.744112 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-catalog-content\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.744211 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mrv\" (UniqueName: \"kubernetes.io/projected/4638ff21-51d9-4b6d-b860-322f48d04d41-kube-api-access-c5mrv\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.744261 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-utilities\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.817920 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk2l8"] Feb 26 11:39:00 crc kubenswrapper[4724]: W0226 11:39:00.828205 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadeb1157_9165_4b0f_afcc_9a8c50b69d3b.slice/crio-3207d44d0103cc6043a57695106ac13c690956b806590532681d7b1015daab91 WatchSource:0}: Error finding container 3207d44d0103cc6043a57695106ac13c690956b806590532681d7b1015daab91: Status 404 returned error can't find the container with id 3207d44d0103cc6043a57695106ac13c690956b806590532681d7b1015daab91 Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.845757 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-utilities\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.845983 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-catalog-content\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.846057 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mrv\" (UniqueName: \"kubernetes.io/projected/4638ff21-51d9-4b6d-b860-322f48d04d41-kube-api-access-c5mrv\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.846824 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-utilities\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.846885 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-catalog-content\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.864444 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mrv\" (UniqueName: \"kubernetes.io/projected/4638ff21-51d9-4b6d-b860-322f48d04d41-kube-api-access-c5mrv\") pod \"redhat-operators-kvrc9\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:00 crc kubenswrapper[4724]: I0226 11:39:00.897464 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.073397 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-npkbx"] Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.087832 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-npkbx"] Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.373807 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.788498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerStarted","Data":"83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774"} Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.791912 4724 generic.go:334] "Generic (PLEG): container finished" podID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerID="e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f" exitCode=0 Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.792041 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerDied","Data":"e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f"} Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.792071 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerStarted","Data":"3207d44d0103cc6043a57695106ac13c690956b806590532681d7b1015daab91"} Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.794277 4724 generic.go:334] "Generic (PLEG): container finished" podID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerID="6a78be5c990c68530fb888dc18c9580308ba7c294de5f5b475776002f4ec49b4" exitCode=0 Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.794305 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerDied","Data":"6a78be5c990c68530fb888dc18c9580308ba7c294de5f5b475776002f4ec49b4"} Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.794320 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerStarted","Data":"72210747e3293351c2b8dd6aed481f0039d41ba2975c6f5e602c44de8cf4d216"} Feb 26 11:39:01 crc kubenswrapper[4724]: I0226 11:39:01.984951 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18b60bf-4d85-4125-802b-6de116af3e23" path="/var/lib/kubelet/pods/c18b60bf-4d85-4125-802b-6de116af3e23/volumes" Feb 26 11:39:03 crc kubenswrapper[4724]: I0226 11:39:03.046894 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bm75g"] Feb 26 11:39:03 crc kubenswrapper[4724]: I0226 11:39:03.066875 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bm75g"] Feb 26 11:39:03 crc kubenswrapper[4724]: I0226 11:39:03.816363 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerID="83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774" exitCode=0 Feb 26 11:39:03 crc kubenswrapper[4724]: I0226 
11:39:03.816438 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerDied","Data":"83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774"} Feb 26 11:39:03 crc kubenswrapper[4724]: I0226 11:39:03.822632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerStarted","Data":"51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20"} Feb 26 11:39:03 crc kubenswrapper[4724]: I0226 11:39:03.990573 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f21b36-c8f6-4804-9f20-317255534086" path="/var/lib/kubelet/pods/22f21b36-c8f6-4804-9f20-317255534086/volumes" Feb 26 11:39:04 crc kubenswrapper[4724]: I0226 11:39:04.049747 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5ab1-account-create-update-2pjjt"] Feb 26 11:39:04 crc kubenswrapper[4724]: I0226 11:39:04.049813 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5ab1-account-create-update-2pjjt"] Feb 26 11:39:04 crc kubenswrapper[4724]: I0226 11:39:04.053631 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bznxm"] Feb 26 11:39:04 crc kubenswrapper[4724]: I0226 11:39:04.069017 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bznxm"] Feb 26 11:39:05 crc kubenswrapper[4724]: I0226 11:39:05.864620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerStarted","Data":"23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03"} Feb 26 11:39:05 crc kubenswrapper[4724]: I0226 11:39:05.908615 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7lxcr" podStartSLOduration=3.213710531 podStartE2EDuration="7.908589398s" podCreationTimestamp="2026-02-26 11:38:58 +0000 UTC" firstStartedPulling="2026-02-26 11:38:59.765940202 +0000 UTC m=+2006.421679317" lastFinishedPulling="2026-02-26 11:39:04.460819069 +0000 UTC m=+2011.116558184" observedRunningTime="2026-02-26 11:39:05.882663211 +0000 UTC m=+2012.538402356" watchObservedRunningTime="2026-02-26 11:39:05.908589398 +0000 UTC m=+2012.564328513" Feb 26 11:39:05 crc kubenswrapper[4724]: I0226 11:39:05.990081 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30bb3e5-0cb3-4fe1-9507-6b3527e260c3" path="/var/lib/kubelet/pods/d30bb3e5-0cb3-4fe1-9507-6b3527e260c3/volumes" Feb 26 11:39:05 crc kubenswrapper[4724]: I0226 11:39:05.992021 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55ef083-be52-48d6-8b62-3d8f92cbeec5" path="/var/lib/kubelet/pods/f55ef083-be52-48d6-8b62-3d8f92cbeec5/volumes" Feb 26 11:39:06 crc kubenswrapper[4724]: I0226 11:39:06.039627 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-be15-account-create-update-dckff"] Feb 26 11:39:06 crc kubenswrapper[4724]: I0226 11:39:06.054523 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-be15-account-create-update-dckff"] Feb 26 11:39:06 crc kubenswrapper[4724]: I0226 11:39:06.879092 4724 generic.go:334] "Generic (PLEG): container finished" podID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" 
containerID="51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20" exitCode=0 Feb 26 11:39:06 crc kubenswrapper[4724]: I0226 11:39:06.879811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerDied","Data":"51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20"} Feb 26 11:39:07 crc kubenswrapper[4724]: I0226 11:39:07.037371 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c767-account-create-update-97tv6"] Feb 26 11:39:07 crc kubenswrapper[4724]: I0226 11:39:07.050749 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c767-account-create-update-97tv6"] Feb 26 11:39:07 crc kubenswrapper[4724]: I0226 11:39:07.899300 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerStarted","Data":"8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3"} Feb 26 11:39:07 crc kubenswrapper[4724]: I0226 11:39:07.933298 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zk2l8" podStartSLOduration=3.378339538 podStartE2EDuration="8.933276303s" podCreationTimestamp="2026-02-26 11:38:59 +0000 UTC" firstStartedPulling="2026-02-26 11:39:01.793793238 +0000 UTC m=+2008.449532373" lastFinishedPulling="2026-02-26 11:39:07.348730023 +0000 UTC m=+2014.004469138" observedRunningTime="2026-02-26 11:39:07.917838632 +0000 UTC m=+2014.573577757" watchObservedRunningTime="2026-02-26 11:39:07.933276303 +0000 UTC m=+2014.589015418" Feb 26 11:39:07 crc kubenswrapper[4724]: I0226 11:39:07.988730 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448f51f7-7dab-41bb-aafa-2ed352f22710" path="/var/lib/kubelet/pods/448f51f7-7dab-41bb-aafa-2ed352f22710/volumes" Feb 26 11:39:08 crc kubenswrapper[4724]: I0226 11:39:08.008114 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d750effb-07c0-4dab-b0d3-0cf351228638" path="/var/lib/kubelet/pods/d750effb-07c0-4dab-b0d3-0cf351228638/volumes" Feb 26 11:39:08 crc kubenswrapper[4724]: I0226 11:39:08.518566 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:39:08 crc kubenswrapper[4724]: I0226 11:39:08.518608 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:39:09 crc kubenswrapper[4724]: I0226 11:39:09.574679 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7lxcr" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:09 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:09 crc kubenswrapper[4724]: > Feb 26 11:39:09 crc kubenswrapper[4724]: I0226 11:39:09.976364 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:39:09 crc kubenswrapper[4724]: E0226 11:39:09.976676 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:39:10 crc kubenswrapper[4724]: I0226 11:39:10.302610 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:10 crc kubenswrapper[4724]: I0226 11:39:10.302668 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:11 crc kubenswrapper[4724]: I0226 11:39:11.410768 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zk2l8" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:11 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:11 crc kubenswrapper[4724]: > Feb 26 11:39:15 crc kubenswrapper[4724]: I0226 11:39:15.997726 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerStarted","Data":"cb04aa0ca4513d88aadda6ae00e45b59fe5ef7a8dd95b5aaf66cfdc0e2c0fc10"} Feb 26 11:39:19 crc kubenswrapper[4724]: I0226 11:39:19.562727 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7lxcr" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:19 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:19 crc kubenswrapper[4724]: > Feb 26 11:39:20 crc kubenswrapper[4724]: I0226 11:39:20.045407 4724 generic.go:334] "Generic (PLEG): container finished" podID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerID="cb04aa0ca4513d88aadda6ae00e45b59fe5ef7a8dd95b5aaf66cfdc0e2c0fc10" exitCode=0 Feb 26 11:39:20 crc kubenswrapper[4724]: I0226 11:39:20.045459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerDied","Data":"cb04aa0ca4513d88aadda6ae00e45b59fe5ef7a8dd95b5aaf66cfdc0e2c0fc10"} Feb 26 11:39:20 crc kubenswrapper[4724]: I0226 11:39:20.975594 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:39:20 crc kubenswrapper[4724]: E0226 11:39:20.976109 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:39:21 crc kubenswrapper[4724]: I0226 11:39:21.360925 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zk2l8" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:21 crc kubenswrapper[4724]: > Feb 26 11:39:22 crc kubenswrapper[4724]: I0226 11:39:22.066564 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerStarted","Data":"2ef9acd2479557da1f6ac2c3e5875f7f34d0d90dd5382d3e2b63cb59ab78920d"} Feb 26 11:39:22 crc kubenswrapper[4724]: I0226 11:39:22.084689 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kvrc9" podStartSLOduration=2.232151438 podStartE2EDuration="22.084673567s" podCreationTimestamp="2026-02-26 11:39:00 +0000 UTC" firstStartedPulling="2026-02-26 11:39:01.795789238 +0000 UTC m=+2008.451528353" lastFinishedPulling="2026-02-26 11:39:21.648311367 +0000 UTC m=+2028.304050482" observedRunningTime="2026-02-26 11:39:22.08203894 +0000 UTC m=+2028.737778065" watchObservedRunningTime="2026-02-26 11:39:22.084673567 +0000 UTC m=+2028.740412682" Feb 26 11:39:24 crc kubenswrapper[4724]: I0226 11:39:24.042488 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k6n2b"] Feb 26 11:39:24 crc kubenswrapper[4724]: I0226 11:39:24.055463 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k6n2b"] Feb 26 11:39:25 crc kubenswrapper[4724]: I0226 11:39:25.986517 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1f2e39-391c-4eb6-9278-080dd6a1ec1d" path="/var/lib/kubelet/pods/fc1f2e39-391c-4eb6-9278-080dd6a1ec1d/volumes" Feb 26 11:39:29 crc kubenswrapper[4724]: I0226 11:39:29.572932 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7lxcr" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:29 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:29 crc kubenswrapper[4724]: > Feb 26 11:39:30 crc kubenswrapper[4724]: I0226 11:39:30.114805 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lmgxz"] Feb 26 11:39:30 crc kubenswrapper[4724]: I0226 11:39:30.129247 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-n599s"] Feb 26 11:39:30 crc kubenswrapper[4724]: I0226 11:39:30.139553 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lmgxz"] Feb 26 11:39:30 crc kubenswrapper[4724]: I0226 11:39:30.174926 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-n599s"] Feb 26 11:39:30 crc kubenswrapper[4724]: I0226 11:39:30.898685 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:30 crc kubenswrapper[4724]: I0226 11:39:30.898744 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:39:31 crc kubenswrapper[4724]: I0226 11:39:31.028891 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c054-account-create-update-pdzqj"] Feb 26 11:39:31 crc kubenswrapper[4724]: I0226 11:39:31.048503 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c054-account-create-update-pdzqj"] Feb 26 11:39:31 crc kubenswrapper[4724]: I0226 11:39:31.344870 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zk2l8" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:31 crc kubenswrapper[4724]: timeout: failed to connect service 
":50051" within 1s Feb 26 11:39:31 crc kubenswrapper[4724]: > Feb 26 11:39:31 crc kubenswrapper[4724]: I0226 11:39:31.947353 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kvrc9" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:31 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:31 crc kubenswrapper[4724]: > Feb 26 11:39:31 crc kubenswrapper[4724]: I0226 11:39:31.991139 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69948f24-a054-4969-8449-0a85840a5da9" path="/var/lib/kubelet/pods/69948f24-a054-4969-8449-0a85840a5da9/volumes" Feb 26 11:39:31 crc kubenswrapper[4724]: I0226 11:39:31.996522 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab0dd31d-c5ce-4d29-a9b4-56497a14e09d" path="/var/lib/kubelet/pods/ab0dd31d-c5ce-4d29-a9b4-56497a14e09d/volumes" Feb 26 11:39:32 crc kubenswrapper[4724]: I0226 11:39:32.001060 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b463c40e-2552-4c4a-97b4-4a0aba53b68a" path="/var/lib/kubelet/pods/b463c40e-2552-4c4a-97b4-4a0aba53b68a/volumes" Feb 26 11:39:35 crc kubenswrapper[4724]: I0226 11:39:35.976497 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:39:35 crc kubenswrapper[4724]: E0226 11:39:35.977300 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:39:38 crc kubenswrapper[4724]: I0226 11:39:38.571452 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:39:38 crc kubenswrapper[4724]: I0226 11:39:38.625374 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:39:38 crc kubenswrapper[4724]: I0226 11:39:38.817213 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7lxcr"] Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.225031 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7lxcr" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" containerID="cri-o://23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03" gracePeriod=2 Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.360044 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.420542 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.808941 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.906947 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w64q\" (UniqueName: \"kubernetes.io/projected/f4eba936-488c-4623-b65b-972f18d8dbb9-kube-api-access-2w64q\") pod \"f4eba936-488c-4623-b65b-972f18d8dbb9\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.907018 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-catalog-content\") pod \"f4eba936-488c-4623-b65b-972f18d8dbb9\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.907193 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-utilities\") pod \"f4eba936-488c-4623-b65b-972f18d8dbb9\" (UID: \"f4eba936-488c-4623-b65b-972f18d8dbb9\") " Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.907928 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-utilities" (OuterVolumeSpecName: "utilities") pod "f4eba936-488c-4623-b65b-972f18d8dbb9" (UID: "f4eba936-488c-4623-b65b-972f18d8dbb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.914353 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4eba936-488c-4623-b65b-972f18d8dbb9-kube-api-access-2w64q" (OuterVolumeSpecName: "kube-api-access-2w64q") pod "f4eba936-488c-4623-b65b-972f18d8dbb9" (UID: "f4eba936-488c-4623-b65b-972f18d8dbb9"). InnerVolumeSpecName "kube-api-access-2w64q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:39:40 crc kubenswrapper[4724]: I0226 11:39:40.983747 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4eba936-488c-4623-b65b-972f18d8dbb9" (UID: "f4eba936-488c-4623-b65b-972f18d8dbb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.013864 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.013900 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w64q\" (UniqueName: \"kubernetes.io/projected/f4eba936-488c-4623-b65b-972f18d8dbb9-kube-api-access-2w64q\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.013917 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4eba936-488c-4623-b65b-972f18d8dbb9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.050691 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5l7x7"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.060978 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eecc-account-create-update-zjhmj"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.072452 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-4eba-account-create-update-c7l6v"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.083955 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mhtt4"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.095628 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5l7x7"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.109135 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mhtt4"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.120306 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eecc-account-create-update-zjhmj"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.130491 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-4eba-account-create-update-c7l6v"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.141016 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-189b-account-create-update-5svgh"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.149938 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-189b-account-create-update-5svgh"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.235484 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerID="23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03" exitCode=0 Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.235546 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerDied","Data":"23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03"} Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.235584 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lxcr" event={"ID":"f4eba936-488c-4623-b65b-972f18d8dbb9","Type":"ContainerDied","Data":"52e47311247c45ee79ac3df21beb08da32ae6b50a18c47d45ec34dbbc38d3d31"} Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.235609 4724 scope.go:117] "RemoveContainer" 
containerID="23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.236582 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lxcr" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.269595 4724 scope.go:117] "RemoveContainer" containerID="83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.271993 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7lxcr"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.281772 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7lxcr"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.300151 4724 scope.go:117] "RemoveContainer" containerID="042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.342849 4724 scope.go:117] "RemoveContainer" containerID="23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03" Feb 26 11:39:41 crc kubenswrapper[4724]: E0226 11:39:41.343396 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03\": container with ID starting with 23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03 not found: ID does not exist" containerID="23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.343476 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03"} err="failed to get container status \"23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03\": rpc error: code = NotFound desc = could not find container \"23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03\": container with ID starting with 23b32e164a831f8da16c4b94ac54f842badad61e34bd103ab1f8c14a03959f03 not found: ID does not exist" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.343533 4724 scope.go:117] "RemoveContainer" containerID="83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774" Feb 26 11:39:41 crc kubenswrapper[4724]: E0226 11:39:41.344035 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774\": container with ID starting with 83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774 not found: ID does not exist" containerID="83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.344067 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774"} err="failed to get container status \"83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774\": rpc error: code = NotFound desc = could not find container \"83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774\": container with ID starting with 83e604f60cdbf7a146bb45c8fa0c151ca16647c79c41a33cb38d875fd559a774 not found: ID does not exist" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.344089 4724 scope.go:117] 
"RemoveContainer" containerID="042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519" Feb 26 11:39:41 crc kubenswrapper[4724]: E0226 11:39:41.344410 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519\": container with ID starting with 042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519 not found: ID does not exist" containerID="042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.344504 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519"} err="failed to get container status \"042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519\": rpc error: code = NotFound desc = could not find container \"042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519\": container with ID starting with 042d6196d39be979b4f244245dec0799f3cf5285b886f5133d7f8d388c1bf519 not found: ID does not exist" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.410757 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk2l8"] Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.944317 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kvrc9" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:41 crc kubenswrapper[4724]: > Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.988502 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445386f8-9d5a-4cae-b0ef-3838172cb946" path="/var/lib/kubelet/pods/445386f8-9d5a-4cae-b0ef-3838172cb946/volumes" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.991389 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4971957b-b209-42b3-8f60-49fd69abde47" path="/var/lib/kubelet/pods/4971957b-b209-42b3-8f60-49fd69abde47/volumes" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.994785 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724d020f-8b7e-454d-a956-d34a9d6bcd6b" path="/var/lib/kubelet/pods/724d020f-8b7e-454d-a956-d34a9d6bcd6b/volumes" Feb 26 11:39:41 crc kubenswrapper[4724]: I0226 11:39:41.999004 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07dd5f3-2e99-4c1d-985a-d47b7f889b54" path="/var/lib/kubelet/pods/a07dd5f3-2e99-4c1d-985a-d47b7f889b54/volumes" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.004109 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8bdb72a-3792-4705-8601-a78cb69b4226" path="/var/lib/kubelet/pods/c8bdb72a-3792-4705-8601-a78cb69b4226/volumes" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.006671 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" path="/var/lib/kubelet/pods/f4eba936-488c-4623-b65b-972f18d8dbb9/volumes" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.246939 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zk2l8" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" 
containerID="cri-o://8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3" gracePeriod=2 Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.691480 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.852781 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-catalog-content\") pod \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.853185 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74s7d\" (UniqueName: \"kubernetes.io/projected/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-kube-api-access-74s7d\") pod \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.853421 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-utilities\") pod \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\" (UID: \"adeb1157-9165-4b0f-afcc-9a8c50b69d3b\") " Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.853942 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-utilities" (OuterVolumeSpecName: "utilities") pod "adeb1157-9165-4b0f-afcc-9a8c50b69d3b" (UID: "adeb1157-9165-4b0f-afcc-9a8c50b69d3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.854398 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.858596 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-kube-api-access-74s7d" (OuterVolumeSpecName: "kube-api-access-74s7d") pod "adeb1157-9165-4b0f-afcc-9a8c50b69d3b" (UID: "adeb1157-9165-4b0f-afcc-9a8c50b69d3b"). InnerVolumeSpecName "kube-api-access-74s7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.877840 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adeb1157-9165-4b0f-afcc-9a8c50b69d3b" (UID: "adeb1157-9165-4b0f-afcc-9a8c50b69d3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.956651 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:42 crc kubenswrapper[4724]: I0226 11:39:42.956695 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74s7d\" (UniqueName: \"kubernetes.io/projected/adeb1157-9165-4b0f-afcc-9a8c50b69d3b-kube-api-access-74s7d\") on node \"crc\" DevicePath \"\"" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.256772 4724 generic.go:334] "Generic (PLEG): container finished" podID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerID="8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3" exitCode=0 Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.256812 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerDied","Data":"8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3"} Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.256838 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk2l8" event={"ID":"adeb1157-9165-4b0f-afcc-9a8c50b69d3b","Type":"ContainerDied","Data":"3207d44d0103cc6043a57695106ac13c690956b806590532681d7b1015daab91"} Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.256855 4724 scope.go:117] "RemoveContainer" containerID="8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.256972 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk2l8" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.297143 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk2l8"] Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.298258 4724 scope.go:117] "RemoveContainer" containerID="51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.308853 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk2l8"] Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.339703 4724 scope.go:117] "RemoveContainer" containerID="e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.368964 4724 scope.go:117] "RemoveContainer" containerID="8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3" Feb 26 11:39:43 crc kubenswrapper[4724]: E0226 11:39:43.376731 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3\": container with ID starting with 8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3 not found: ID does not exist" containerID="8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.376886 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3"} err="failed to get container status \"8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3\": rpc error: code = NotFound desc = could not find container \"8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3\": container with ID starting with 8ca735e150a05728923788430d69ca14ec92ccc307b1ebfc08277b6986f599c3 not found: ID does not exist" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.376990 4724 scope.go:117] "RemoveContainer" containerID="51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20" Feb 26 11:39:43 crc kubenswrapper[4724]: E0226 11:39:43.377574 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20\": container with ID starting with 51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20 not found: ID does not exist" containerID="51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.377620 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20"} err="failed to get container status \"51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20\": rpc error: code = NotFound desc = could not find container \"51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20\": container with ID starting with 51c78ca88613b6c80a81291c427da41bf13bc7f1c5011575443daab47538eb20 not found: ID does not exist" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.377645 4724 scope.go:117] "RemoveContainer" containerID="e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f" Feb 26 11:39:43 crc kubenswrapper[4724]: E0226 11:39:43.378102 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f\": container with ID starting with e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f not found: ID does not exist" containerID="e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f" Feb 26 11:39:43 crc kubenswrapper[4724]: I0226 11:39:43.378144 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f"} err="failed to get container status \"e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f\": rpc error: code = NotFound desc = could not find container \"e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f\": container with ID starting with e58ebd382f82a9fce19ffe650ef0509584934f1391bf65aa45af11aa9d5b217f not found: ID does not exist" Feb 26 11:39:44 crc kubenswrapper[4724]: I0226 11:39:43.990049 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" path="/var/lib/kubelet/pods/adeb1157-9165-4b0f-afcc-9a8c50b69d3b/volumes" Feb 26 11:39:45 crc kubenswrapper[4724]: I0226 11:39:45.029480 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sp2k2"] Feb 26 11:39:45 crc kubenswrapper[4724]: I0226 11:39:45.043981 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sp2k2"] Feb 26 11:39:45 crc kubenswrapper[4724]: I0226 11:39:45.988054 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ddc7ed-7a58-4859-acc1-f6e9796dff95" path="/var/lib/kubelet/pods/94ddc7ed-7a58-4859-acc1-f6e9796dff95/volumes" Feb 26 11:39:49 crc kubenswrapper[4724]: I0226 11:39:49.030591 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6v8t4"] Feb 26 11:39:49 crc kubenswrapper[4724]: I0226 11:39:49.038796 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6v8t4"] Feb 26 11:39:49 crc kubenswrapper[4724]: I0226 11:39:49.987458 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a58b47-8a63-4ec7-aad6-5b7668e56faa" path="/var/lib/kubelet/pods/f5a58b47-8a63-4ec7-aad6-5b7668e56faa/volumes" Feb 26 11:39:50 crc kubenswrapper[4724]: I0226 11:39:50.976099 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:39:50 crc kubenswrapper[4724]: E0226 11:39:50.976472 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:39:51 crc kubenswrapper[4724]: I0226 11:39:51.941552 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kvrc9" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" probeResult="failure" output=< Feb 26 11:39:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:39:51 crc kubenswrapper[4724]: > Feb 26 11:39:56 crc kubenswrapper[4724]: I0226 11:39:56.834432 4724 scope.go:117] "RemoveContainer" 
containerID="5d64263900c6d4441fb7e16054cea6d07bffe8f76a42f8f6a297bd1bbf9b370d" Feb 26 11:39:56 crc kubenswrapper[4724]: I0226 11:39:56.865818 4724 scope.go:117] "RemoveContainer" containerID="d0291347c52910dd8b1fc1d553d72bf2ac4dff608b401b522f6d41ab56af53f2" Feb 26 11:39:56 crc kubenswrapper[4724]: I0226 11:39:56.943033 4724 scope.go:117] "RemoveContainer" containerID="0d4d9a38cfea90a768d263d089365ccd094611cb59711921bf8c684118a170f2" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.047025 4724 scope.go:117] "RemoveContainer" containerID="e692bc8e1416f9b1d0afcb0b9f4e8f41a0b6d8aefed7dd652d0ae8efdb358a76" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.122994 4724 scope.go:117] "RemoveContainer" containerID="5eb150436a51f707d5b2b1c9c73b54c2a7d6c68558b1cd03bce9952ef768d1f1" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.172816 4724 scope.go:117] "RemoveContainer" containerID="46fafd04c5672acd344a3d68e52aeac492feec2f72c0edab74e5c872d0b52e95" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.221742 4724 scope.go:117] "RemoveContainer" containerID="5283b1bf7f17d10b3c1ef3cf7f7708d7a06576df4acbc7e41a0b4770e2af9392" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.266573 4724 scope.go:117] "RemoveContainer" containerID="0b5afc72420088c44db251ca4328abc3f87e7aa7d21eeef10810463c036615a9" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.300085 4724 scope.go:117] "RemoveContainer" containerID="5d35e005213ebc8f35ff1e070ceecfc17c89396b8959da61f9678c26661fb115" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.325131 4724 scope.go:117] "RemoveContainer" containerID="e79dce8d3c67b715cfdb3148bab9a6e27d2568eb40b08b3221eb3421b3c4f4bc" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.351576 4724 scope.go:117] "RemoveContainer" containerID="12aef3eb63f611cba309c05081f312028b7458ba9d7f9ef2c514a1100339337a" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.400249 4724 scope.go:117] "RemoveContainer" containerID="8122adcd5d7ac1923df354d69a5299acf98a8a65a40b47120df7eb1625d0ad9e" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.452816 4724 scope.go:117] "RemoveContainer" containerID="d143bfa34abc036a7a83f3ced969efba4f761e89c2d7a62db54c37be67c471da" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.484492 4724 scope.go:117] "RemoveContainer" containerID="13dd34b2fee27b1bf61adc8145b2d97b48f7c21f29e6ad77f11e7bf52966aabd" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.524037 4724 scope.go:117] "RemoveContainer" containerID="2ce603486d2b7cd9c95715600768435c25e1b1f4df8ab88dac1b372401148755" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.554051 4724 scope.go:117] "RemoveContainer" containerID="2f173bf1c648b98533051df7b5eedc8205255da152e3ac406e3e4d9813f0fb00" Feb 26 11:39:57 crc kubenswrapper[4724]: I0226 11:39:57.600518 4724 scope.go:117] "RemoveContainer" containerID="96eed766d4870393f6f54c6af52c022e01e8758dc73bec5501dc654f759c0c56" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.177477 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535100-5dgl8"] Feb 26 11:40:00 crc kubenswrapper[4724]: E0226 11:40:00.178296 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="extract-utilities" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178315 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="extract-utilities" Feb 26 11:40:00 crc 
kubenswrapper[4724]: E0226 11:40:00.178328 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="extract-content" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178336 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="extract-content" Feb 26 11:40:00 crc kubenswrapper[4724]: E0226 11:40:00.178370 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="extract-content" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178379 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="extract-content" Feb 26 11:40:00 crc kubenswrapper[4724]: E0226 11:40:00.178398 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178406 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" Feb 26 11:40:00 crc kubenswrapper[4724]: E0226 11:40:00.178423 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178433 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" Feb 26 11:40:00 crc kubenswrapper[4724]: E0226 11:40:00.178459 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="extract-utilities" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178468 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="extract-utilities" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178705 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="adeb1157-9165-4b0f-afcc-9a8c50b69d3b" containerName="registry-server" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.178740 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4eba936-488c-4623-b65b-972f18d8dbb9" containerName="registry-server" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.179568 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.183873 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.183932 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.183881 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.196171 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-5dgl8"] Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.308276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrf7\" (UniqueName: \"kubernetes.io/projected/1ec5fe4d-b8c4-42ed-894e-c5927452d116-kube-api-access-2lrf7\") pod \"auto-csr-approver-29535100-5dgl8\" (UID: \"1ec5fe4d-b8c4-42ed-894e-c5927452d116\") " pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.409870 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lrf7\" (UniqueName: \"kubernetes.io/projected/1ec5fe4d-b8c4-42ed-894e-c5927452d116-kube-api-access-2lrf7\") pod \"auto-csr-approver-29535100-5dgl8\" (UID: \"1ec5fe4d-b8c4-42ed-894e-c5927452d116\") " pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.464613 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lrf7\" (UniqueName: \"kubernetes.io/projected/1ec5fe4d-b8c4-42ed-894e-c5927452d116-kube-api-access-2lrf7\") pod \"auto-csr-approver-29535100-5dgl8\" (UID: \"1ec5fe4d-b8c4-42ed-894e-c5927452d116\") " pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.510254 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:00 crc kubenswrapper[4724]: I0226 11:40:00.965316 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:40:01 crc kubenswrapper[4724]: I0226 11:40:01.015285 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 11:40:01 crc kubenswrapper[4724]: I0226 11:40:01.116743 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-5dgl8"] Feb 26 11:40:01 crc kubenswrapper[4724]: I0226 11:40:01.460058 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" event={"ID":"1ec5fe4d-b8c4-42ed-894e-c5927452d116","Type":"ContainerStarted","Data":"7310c14de715c7c8a50aec927c6fb43473e10cc7344e9a74e1ae51b67d92af02"} Feb 26 11:40:02 crc kubenswrapper[4724]: I0226 11:40:02.077882 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 11:40:02 crc kubenswrapper[4724]: I0226 11:40:02.201956 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgvbv"] Feb 26 11:40:02 crc kubenswrapper[4724]: I0226 11:40:02.202529 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rgvbv" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="registry-server" containerID="cri-o://7a55874dfed892b1f0935adbc519bb605d08005363b20300a214536fcf65e46b" gracePeriod=2 Feb 26 11:40:02 crc kubenswrapper[4724]: I0226 11:40:02.479551 4724 generic.go:334] "Generic (PLEG): container finished" podID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerID="7a55874dfed892b1f0935adbc519bb605d08005363b20300a214536fcf65e46b" exitCode=0 Feb 26 11:40:02 crc kubenswrapper[4724]: I0226 11:40:02.479589 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgvbv" event={"ID":"11e1e3c7-2b69-4645-9219-806bc00f5717","Type":"ContainerDied","Data":"7a55874dfed892b1f0935adbc519bb605d08005363b20300a214536fcf65e46b"} Feb 26 11:40:02 crc kubenswrapper[4724]: I0226 11:40:02.953154 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.078471 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-catalog-content\") pod \"11e1e3c7-2b69-4645-9219-806bc00f5717\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.078682 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-utilities\") pod \"11e1e3c7-2b69-4645-9219-806bc00f5717\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.078713 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk62p\" (UniqueName: \"kubernetes.io/projected/11e1e3c7-2b69-4645-9219-806bc00f5717-kube-api-access-rk62p\") pod \"11e1e3c7-2b69-4645-9219-806bc00f5717\" (UID: \"11e1e3c7-2b69-4645-9219-806bc00f5717\") " Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.084767 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-utilities" (OuterVolumeSpecName: "utilities") pod "11e1e3c7-2b69-4645-9219-806bc00f5717" (UID: "11e1e3c7-2b69-4645-9219-806bc00f5717"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.101879 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e1e3c7-2b69-4645-9219-806bc00f5717-kube-api-access-rk62p" (OuterVolumeSpecName: "kube-api-access-rk62p") pod "11e1e3c7-2b69-4645-9219-806bc00f5717" (UID: "11e1e3c7-2b69-4645-9219-806bc00f5717"). InnerVolumeSpecName "kube-api-access-rk62p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.182481 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.182519 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk62p\" (UniqueName: \"kubernetes.io/projected/11e1e3c7-2b69-4645-9219-806bc00f5717-kube-api-access-rk62p\") on node \"crc\" DevicePath \"\"" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.316486 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11e1e3c7-2b69-4645-9219-806bc00f5717" (UID: "11e1e3c7-2b69-4645-9219-806bc00f5717"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.386902 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e1e3c7-2b69-4645-9219-806bc00f5717-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.491772 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgvbv" event={"ID":"11e1e3c7-2b69-4645-9219-806bc00f5717","Type":"ContainerDied","Data":"1db7d4f73687c8f7fc1cc43bdbeb7b63894416c1c95c6412b9fb499dad8b67ce"} Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.492096 4724 scope.go:117] "RemoveContainer" containerID="7a55874dfed892b1f0935adbc519bb605d08005363b20300a214536fcf65e46b" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.492245 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgvbv" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.530028 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgvbv"] Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.539617 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rgvbv"] Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.542395 4724 scope.go:117] "RemoveContainer" containerID="9839beb9d9f07797c6c47f08c3ff8a4c742a9feaecbc6f516be4db8526d5be9b" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.581009 4724 scope.go:117] "RemoveContainer" containerID="e74868a908cb2b969cb7866ad998411af99c3357bc99f76067c98dc0fdb85701" Feb 26 11:40:03 crc kubenswrapper[4724]: I0226 11:40:03.986430 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" path="/var/lib/kubelet/pods/11e1e3c7-2b69-4645-9219-806bc00f5717/volumes" Feb 26 11:40:04 crc kubenswrapper[4724]: I0226 11:40:04.503562 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" event={"ID":"1ec5fe4d-b8c4-42ed-894e-c5927452d116","Type":"ContainerStarted","Data":"9713a4d0fb94114a0a9331cc38f4ef1c364373d64c51753cfd1957d8d3f4ae9f"} Feb 26 11:40:04 crc kubenswrapper[4724]: I0226 11:40:04.530650 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" podStartSLOduration=2.888251592 podStartE2EDuration="4.530629499s" podCreationTimestamp="2026-02-26 11:40:00 +0000 UTC" firstStartedPulling="2026-02-26 11:40:01.133560257 +0000 UTC m=+2067.789299392" lastFinishedPulling="2026-02-26 11:40:02.775938184 +0000 UTC m=+2069.431677299" observedRunningTime="2026-02-26 11:40:04.520288536 +0000 UTC m=+2071.176027651" watchObservedRunningTime="2026-02-26 11:40:04.530629499 +0000 UTC m=+2071.186368604" Feb 26 11:40:05 crc kubenswrapper[4724]: I0226 11:40:05.976073 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:40:05 crc kubenswrapper[4724]: E0226 11:40:05.976497 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" 
podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:40:06 crc kubenswrapper[4724]: I0226 11:40:06.521767 4724 generic.go:334] "Generic (PLEG): container finished" podID="1ec5fe4d-b8c4-42ed-894e-c5927452d116" containerID="9713a4d0fb94114a0a9331cc38f4ef1c364373d64c51753cfd1957d8d3f4ae9f" exitCode=0 Feb 26 11:40:06 crc kubenswrapper[4724]: I0226 11:40:06.521830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" event={"ID":"1ec5fe4d-b8c4-42ed-894e-c5927452d116","Type":"ContainerDied","Data":"9713a4d0fb94114a0a9331cc38f4ef1c364373d64c51753cfd1957d8d3f4ae9f"} Feb 26 11:40:07 crc kubenswrapper[4724]: I0226 11:40:07.883143 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:07 crc kubenswrapper[4724]: I0226 11:40:07.993513 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lrf7\" (UniqueName: \"kubernetes.io/projected/1ec5fe4d-b8c4-42ed-894e-c5927452d116-kube-api-access-2lrf7\") pod \"1ec5fe4d-b8c4-42ed-894e-c5927452d116\" (UID: \"1ec5fe4d-b8c4-42ed-894e-c5927452d116\") " Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.009404 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec5fe4d-b8c4-42ed-894e-c5927452d116-kube-api-access-2lrf7" (OuterVolumeSpecName: "kube-api-access-2lrf7") pod "1ec5fe4d-b8c4-42ed-894e-c5927452d116" (UID: "1ec5fe4d-b8c4-42ed-894e-c5927452d116"). InnerVolumeSpecName "kube-api-access-2lrf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.095731 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lrf7\" (UniqueName: \"kubernetes.io/projected/1ec5fe4d-b8c4-42ed-894e-c5927452d116-kube-api-access-2lrf7\") on node \"crc\" DevicePath \"\"" Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.539683 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" event={"ID":"1ec5fe4d-b8c4-42ed-894e-c5927452d116","Type":"ContainerDied","Data":"7310c14de715c7c8a50aec927c6fb43473e10cc7344e9a74e1ae51b67d92af02"} Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.539730 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7310c14de715c7c8a50aec927c6fb43473e10cc7344e9a74e1ae51b67d92af02" Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.539734 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535100-5dgl8" Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.592725 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-nv6rp"] Feb 26 11:40:08 crc kubenswrapper[4724]: I0226 11:40:08.601671 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535094-nv6rp"] Feb 26 11:40:09 crc kubenswrapper[4724]: I0226 11:40:09.987784 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235c375a-3a2e-4ec0-88d9-6aee5b464dd2" path="/var/lib/kubelet/pods/235c375a-3a2e-4ec0-88d9-6aee5b464dd2/volumes" Feb 26 11:40:18 crc kubenswrapper[4724]: I0226 11:40:18.976089 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef" Feb 26 11:40:19 crc kubenswrapper[4724]: I0226 11:40:19.642806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"6a82f967eea840ca5de412ba47b3c1a0b6b8bb3dc6664316ec3d32f3e1eadd2e"} Feb 26 11:40:58 crc kubenswrapper[4724]: I0226 11:40:58.140724 4724 scope.go:117] "RemoveContainer" containerID="4157c795975368696249e150c6d3ade3f4b5dd8cbf8e5f014d54c297115943fc" Feb 26 11:41:02 crc kubenswrapper[4724]: I0226 11:41:02.468009 4724 generic.go:334] "Generic (PLEG): container finished" podID="fb1451db-04cb-41fc-b46a-3a64ea6e8528" containerID="3bd9f0c660b01ab36ebef299ce821e9b76db2d56c8de6721f4df62bc6acd1c6e" exitCode=0 Feb 26 11:41:02 crc kubenswrapper[4724]: I0226 11:41:02.468215 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" event={"ID":"fb1451db-04cb-41fc-b46a-3a64ea6e8528","Type":"ContainerDied","Data":"3bd9f0c660b01ab36ebef299ce821e9b76db2d56c8de6721f4df62bc6acd1c6e"} Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.131722 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.237845 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-inventory\") pod \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.237911 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdgr\" (UniqueName: \"kubernetes.io/projected/fb1451db-04cb-41fc-b46a-3a64ea6e8528-kube-api-access-bmdgr\") pod \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.238121 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-bootstrap-combined-ca-bundle\") pod \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.238210 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-ssh-key-openstack-edpm-ipam\") pod \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\" (UID: \"fb1451db-04cb-41fc-b46a-3a64ea6e8528\") " Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.245313 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1451db-04cb-41fc-b46a-3a64ea6e8528-kube-api-access-bmdgr" (OuterVolumeSpecName: "kube-api-access-bmdgr") pod "fb1451db-04cb-41fc-b46a-3a64ea6e8528" (UID: "fb1451db-04cb-41fc-b46a-3a64ea6e8528"). InnerVolumeSpecName "kube-api-access-bmdgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.245724 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fb1451db-04cb-41fc-b46a-3a64ea6e8528" (UID: "fb1451db-04cb-41fc-b46a-3a64ea6e8528"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.274143 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-inventory" (OuterVolumeSpecName: "inventory") pod "fb1451db-04cb-41fc-b46a-3a64ea6e8528" (UID: "fb1451db-04cb-41fc-b46a-3a64ea6e8528"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.277323 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fb1451db-04cb-41fc-b46a-3a64ea6e8528" (UID: "fb1451db-04cb-41fc-b46a-3a64ea6e8528"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.340201 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.340243 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.340252 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdgr\" (UniqueName: \"kubernetes.io/projected/fb1451db-04cb-41fc-b46a-3a64ea6e8528-kube-api-access-bmdgr\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.340261 4724 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1451db-04cb-41fc-b46a-3a64ea6e8528-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.519755 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" event={"ID":"fb1451db-04cb-41fc-b46a-3a64ea6e8528","Type":"ContainerDied","Data":"ab395d4f730050585022d256cf355608d2b342c1502c9a8c0453e3fefe07c342"} Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.519798 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.519798 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab395d4f730050585022d256cf355608d2b342c1502c9a8c0453e3fefe07c342" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.621172 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q"] Feb 26 11:41:04 crc kubenswrapper[4724]: E0226 11:41:04.622057 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec5fe4d-b8c4-42ed-894e-c5927452d116" containerName="oc" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622123 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec5fe4d-b8c4-42ed-894e-c5927452d116" containerName="oc" Feb 26 11:41:04 crc kubenswrapper[4724]: E0226 11:41:04.622259 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1451db-04cb-41fc-b46a-3a64ea6e8528" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622318 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1451db-04cb-41fc-b46a-3a64ea6e8528" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 11:41:04 crc kubenswrapper[4724]: E0226 11:41:04.622382 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="registry-server" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622433 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="registry-server" Feb 26 11:41:04 crc kubenswrapper[4724]: E0226 11:41:04.622495 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" 
containerName="extract-utilities" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622549 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="extract-utilities" Feb 26 11:41:04 crc kubenswrapper[4724]: E0226 11:41:04.622605 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="extract-content" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622660 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="extract-content" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622875 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec5fe4d-b8c4-42ed-894e-c5927452d116" containerName="oc" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.622936 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e1e3c7-2b69-4645-9219-806bc00f5717" containerName="registry-server" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.623002 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1451db-04cb-41fc-b46a-3a64ea6e8528" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.623875 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.626677 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.626927 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.627054 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.627245 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.713199 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q"] Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.746406 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.746534 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.746577 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drrjt\" (UniqueName: 
\"kubernetes.io/projected/3587d474-38c2-4bdb-af02-8f03932c85bc-kube-api-access-drrjt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.848853 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.848934 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.848959 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drrjt\" (UniqueName: \"kubernetes.io/projected/3587d474-38c2-4bdb-af02-8f03932c85bc-kube-api-access-drrjt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.856886 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.857312 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.898041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drrjt\" (UniqueName: \"kubernetes.io/projected/3587d474-38c2-4bdb-af02-8f03932c85bc-kube-api-access-drrjt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:04 crc kubenswrapper[4724]: I0226 11:41:04.941994 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" Feb 26 11:41:05 crc kubenswrapper[4724]: I0226 11:41:05.496689 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q"] Feb 26 11:41:05 crc kubenswrapper[4724]: I0226 11:41:05.511910 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:41:05 crc kubenswrapper[4724]: I0226 11:41:05.529438 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" event={"ID":"3587d474-38c2-4bdb-af02-8f03932c85bc","Type":"ContainerStarted","Data":"63fd9a8a5d2d33ce00c7602fe87b627a4730dce0bf8a5fa5e9b19e3b89bce853"} Feb 26 11:41:06 crc kubenswrapper[4724]: I0226 11:41:06.542745 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" event={"ID":"3587d474-38c2-4bdb-af02-8f03932c85bc","Type":"ContainerStarted","Data":"c59c76aca1a7185aa5d2259a2e2758e9d227e702276b7470d0c5980d69e1b943"} Feb 26 11:41:06 crc kubenswrapper[4724]: I0226 11:41:06.566834 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" podStartSLOduration=1.8885460059999999 podStartE2EDuration="2.566814013s" podCreationTimestamp="2026-02-26 11:41:04 +0000 UTC" firstStartedPulling="2026-02-26 11:41:05.511669484 +0000 UTC m=+2132.167408599" lastFinishedPulling="2026-02-26 11:41:06.189937491 +0000 UTC m=+2132.845676606" observedRunningTime="2026-02-26 11:41:06.562545804 +0000 UTC m=+2133.218284929" watchObservedRunningTime="2026-02-26 11:41:06.566814013 +0000 UTC m=+2133.222553128" Feb 26 11:41:11 crc kubenswrapper[4724]: I0226 11:41:11.043220 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bnckl"] Feb 26 11:41:11 crc kubenswrapper[4724]: I0226 11:41:11.052730 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bnckl"] Feb 26 11:41:11 crc kubenswrapper[4724]: I0226 11:41:11.991245 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f54af76-4781-4532-b8fc-5100f18b0579" path="/var/lib/kubelet/pods/3f54af76-4781-4532-b8fc-5100f18b0579/volumes" Feb 26 11:41:14 crc kubenswrapper[4724]: I0226 11:41:14.107640 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b5xkt"] Feb 26 11:41:14 crc kubenswrapper[4724]: I0226 11:41:14.158910 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b5xkt"] Feb 26 11:41:15 crc kubenswrapper[4724]: I0226 11:41:15.992504 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3c003b-9f91-4c11-a530-3f39fe5072b3" path="/var/lib/kubelet/pods/fb3c003b-9f91-4c11-a530-3f39fe5072b3/volumes" Feb 26 11:41:28 crc kubenswrapper[4724]: I0226 11:41:28.031392 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rkvvl"] Feb 26 11:41:28 crc kubenswrapper[4724]: I0226 11:41:28.039615 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rkvvl"] Feb 26 11:41:29 crc kubenswrapper[4724]: I0226 11:41:29.988349 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedd4492-c73a-4f47-8243-fea2dd842a4f" path="/var/lib/kubelet/pods/dedd4492-c73a-4f47-8243-fea2dd842a4f/volumes" Feb 26 11:41:40 crc kubenswrapper[4724]: I0226 
Feb 26 11:41:40 crc kubenswrapper[4724]: I0226 11:41:40.078293 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jrqgs"]
Feb 26 11:41:40 crc kubenswrapper[4724]: I0226 11:41:40.091827 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jrqgs"]
Feb 26 11:41:41 crc kubenswrapper[4724]: I0226 11:41:41.988541 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65202f21-3756-4083-b158-9f06dca33deb" path="/var/lib/kubelet/pods/65202f21-3756-4083-b158-9f06dca33deb/volumes"
Feb 26 11:41:45 crc kubenswrapper[4724]: I0226 11:41:45.035751 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fllvh"]
Feb 26 11:41:45 crc kubenswrapper[4724]: I0226 11:41:45.044483 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b6cqc"]
Feb 26 11:41:45 crc kubenswrapper[4724]: I0226 11:41:45.055971 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b6cqc"]
Feb 26 11:41:45 crc kubenswrapper[4724]: I0226 11:41:45.064770 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fllvh"]
Feb 26 11:41:45 crc kubenswrapper[4724]: I0226 11:41:45.988143 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5fb0ea-707e-4123-8510-b1d1f9976c34" path="/var/lib/kubelet/pods/ba5fb0ea-707e-4123-8510-b1d1f9976c34/volumes"
Feb 26 11:41:45 crc kubenswrapper[4724]: I0226 11:41:45.991467 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f963de-7cc1-40fa-93ce-5f1facd31ffc" path="/var/lib/kubelet/pods/f6f963de-7cc1-40fa-93ce-5f1facd31ffc/volumes"
Feb 26 11:41:58 crc kubenswrapper[4724]: I0226 11:41:58.261375 4724 scope.go:117] "RemoveContainer" containerID="7dc216225ecc5fac07af675a3ec7380426408abd80a1107e041dcb41e471d115"
Feb 26 11:41:58 crc kubenswrapper[4724]: I0226 11:41:58.314280 4724 scope.go:117] "RemoveContainer" containerID="1c6c39dc2d7757dbed1a2892e1c42c6122582363b7b13fb7765bb627d4ad724b"
Feb 26 11:41:58 crc kubenswrapper[4724]: I0226 11:41:58.346498 4724 scope.go:117] "RemoveContainer" containerID="6531ce102a318f4e1d9c9d45ec01a52344227633d6e92c79d338f39d229919e8"
Feb 26 11:41:58 crc kubenswrapper[4724]: I0226 11:41:58.404229 4724 scope.go:117] "RemoveContainer" containerID="0c3d6803c259df57f6cd352267d647dad45979ecb49ea616bc8093f7a864db34"
Feb 26 11:41:58 crc kubenswrapper[4724]: I0226 11:41:58.457835 4724 scope.go:117] "RemoveContainer" containerID="d18746885daa54caaa95fe8bec1dd5ec0a80732abefe55267bf92a3efa0fcb54"
Feb 26 11:41:58 crc kubenswrapper[4724]: I0226 11:41:58.536927 4724 scope.go:117] "RemoveContainer" containerID="30c67edacf3dbdd37a1504690171351de2cf9b8023717ca71b1d73366dc02fc8"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.153261 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535102-5hzg5"]
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.155000 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.161863 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-5hzg5"]
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.174716 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.174925 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.175032 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.212590 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcq98\" (UniqueName: \"kubernetes.io/projected/64123437-3525-406b-b430-90dcfb4aaecb-kube-api-access-lcq98\") pod \"auto-csr-approver-29535102-5hzg5\" (UID: \"64123437-3525-406b-b430-90dcfb4aaecb\") " pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.314279 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcq98\" (UniqueName: \"kubernetes.io/projected/64123437-3525-406b-b430-90dcfb4aaecb-kube-api-access-lcq98\") pod \"auto-csr-approver-29535102-5hzg5\" (UID: \"64123437-3525-406b-b430-90dcfb4aaecb\") " pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.335164 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcq98\" (UniqueName: \"kubernetes.io/projected/64123437-3525-406b-b430-90dcfb4aaecb-kube-api-access-lcq98\") pod \"auto-csr-approver-29535102-5hzg5\" (UID: \"64123437-3525-406b-b430-90dcfb4aaecb\") " pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.507934 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:00 crc kubenswrapper[4724]: I0226 11:42:00.973101 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-5hzg5"]
Feb 26 11:42:01 crc kubenswrapper[4724]: I0226 11:42:01.045225 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-5hzg5" event={"ID":"64123437-3525-406b-b430-90dcfb4aaecb","Type":"ContainerStarted","Data":"8fe2790f0903c17171f4740711ac6a2db0dee5bb0dff1322799962f47bed81d2"}
Feb 26 11:42:03 crc kubenswrapper[4724]: I0226 11:42:03.063871 4724 generic.go:334] "Generic (PLEG): container finished" podID="64123437-3525-406b-b430-90dcfb4aaecb" containerID="6bcd250dea32e3bfdcfe5956287d51c5be020c332108d1700230774b3d75897c" exitCode=0
Feb 26 11:42:03 crc kubenswrapper[4724]: I0226 11:42:03.064021 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-5hzg5" event={"ID":"64123437-3525-406b-b430-90dcfb4aaecb","Type":"ContainerDied","Data":"6bcd250dea32e3bfdcfe5956287d51c5be020c332108d1700230774b3d75897c"}
Feb 26 11:42:04 crc kubenswrapper[4724]: I0226 11:42:04.429616 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:04 crc kubenswrapper[4724]: I0226 11:42:04.592064 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcq98\" (UniqueName: \"kubernetes.io/projected/64123437-3525-406b-b430-90dcfb4aaecb-kube-api-access-lcq98\") pod \"64123437-3525-406b-b430-90dcfb4aaecb\" (UID: \"64123437-3525-406b-b430-90dcfb4aaecb\") "
Feb 26 11:42:04 crc kubenswrapper[4724]: I0226 11:42:04.601086 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64123437-3525-406b-b430-90dcfb4aaecb-kube-api-access-lcq98" (OuterVolumeSpecName: "kube-api-access-lcq98") pod "64123437-3525-406b-b430-90dcfb4aaecb" (UID: "64123437-3525-406b-b430-90dcfb4aaecb"). InnerVolumeSpecName "kube-api-access-lcq98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:42:04 crc kubenswrapper[4724]: I0226 11:42:04.694660 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcq98\" (UniqueName: \"kubernetes.io/projected/64123437-3525-406b-b430-90dcfb4aaecb-kube-api-access-lcq98\") on node \"crc\" DevicePath \"\""
Feb 26 11:42:05 crc kubenswrapper[4724]: I0226 11:42:05.085617 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535102-5hzg5" event={"ID":"64123437-3525-406b-b430-90dcfb4aaecb","Type":"ContainerDied","Data":"8fe2790f0903c17171f4740711ac6a2db0dee5bb0dff1322799962f47bed81d2"}
Feb 26 11:42:05 crc kubenswrapper[4724]: I0226 11:42:05.086141 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fe2790f0903c17171f4740711ac6a2db0dee5bb0dff1322799962f47bed81d2"
Feb 26 11:42:05 crc kubenswrapper[4724]: I0226 11:42:05.085724 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535102-5hzg5"
Feb 26 11:42:05 crc kubenswrapper[4724]: I0226 11:42:05.502020 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-72km4"]
Feb 26 11:42:05 crc kubenswrapper[4724]: I0226 11:42:05.511571 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535096-72km4"]
Feb 26 11:42:05 crc kubenswrapper[4724]: I0226 11:42:05.986613 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed5fa8e-48cb-497b-b871-3dd17b4a77e2" path="/var/lib/kubelet/pods/4ed5fa8e-48cb-497b-b871-3dd17b4a77e2/volumes"
Feb 26 11:42:13 crc kubenswrapper[4724]: I0226 11:42:13.521345 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-746558bfbf-gbdpm" podUID="acbb8b99-0b04-48c7-904e-a5c5304813a3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 26 11:42:42 crc kubenswrapper[4724]: I0226 11:42:42.048649 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2caf-account-create-update-lqcj8"]
Feb 26 11:42:42 crc kubenswrapper[4724]: I0226 11:42:42.057212 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d3bd-account-create-update-lnz8z"]
Feb 26 11:42:42 crc kubenswrapper[4724]: I0226 11:42:42.066527 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2caf-account-create-update-lqcj8"]
Feb 26 11:42:42 crc kubenswrapper[4724]: I0226 11:42:42.074922 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d3bd-account-create-update-lnz8z"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.030879 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-pw6nq"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.039017 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6tpht"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.048403 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3641-account-create-update-fq2s7"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.057997 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7grhf"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.065640 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6tpht"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.073212 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pw6nq"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.080554 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3641-account-create-update-fq2s7"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.087206 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7grhf"]
Feb 26 11:42:43 crc kubenswrapper[4724]: I0226 11:42:43.993433 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565dc4e0-05d9-4e31-8a8a-0865909b2523" path="/var/lib/kubelet/pods/565dc4e0-05d9-4e31-8a8a-0865909b2523/volumes"
Feb 26 11:42:44 crc kubenswrapper[4724]: I0226 11:42:44.023662 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6014f5be-ec67-4cfd-89f7-74db5e786dc0" path="/var/lib/kubelet/pods/6014f5be-ec67-4cfd-89f7-74db5e786dc0/volumes"
Feb 26 11:42:44 crc kubenswrapper[4724]: I0226 11:42:44.037374 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6804ceff-36ec-4004-baf8-69e65d998378" path="/var/lib/kubelet/pods/6804ceff-36ec-4004-baf8-69e65d998378/volumes"
Feb 26 11:42:44 crc kubenswrapper[4724]: I0226 11:42:44.049945 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ddd79bf-b594-45ff-95e6-69bb0bc58dca" path="/var/lib/kubelet/pods/7ddd79bf-b594-45ff-95e6-69bb0bc58dca/volumes"
Feb 26 11:42:44 crc kubenswrapper[4724]: I0226 11:42:44.055756 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a8dfd5-eeec-402d-a5fb-a087eae65b81" path="/var/lib/kubelet/pods/86a8dfd5-eeec-402d-a5fb-a087eae65b81/volumes"
Feb 26 11:42:44 crc kubenswrapper[4724]: I0226 11:42:44.067319 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1589cfa-091d-47f6-bd8f-0db0f5756cce" path="/var/lib/kubelet/pods/c1589cfa-091d-47f6-bd8f-0db0f5756cce/volumes"
Feb 26 11:42:46 crc kubenswrapper[4724]: I0226 11:42:46.906943 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:42:46 crc kubenswrapper[4724]: I0226 11:42:46.907284 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.696445 4724 scope.go:117] "RemoveContainer" containerID="684ca4f93f7b4f276a1adfbd9a7f4246ffc141e2e1a7e712ee79bdcc3a738275"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.732904 4724 scope.go:117] "RemoveContainer" containerID="e72cf10a2d626e9777a88fb79acd2b087f1c80291a99a3d78bd179f459f362d3"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.776153 4724 scope.go:117] "RemoveContainer" containerID="bfdb3719c294050a657994e90245081edae1735458b800544e754ffadbead17e"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.838070 4724 scope.go:117] "RemoveContainer" containerID="0c21e402f6c3d81a7341fec53fae9339e565f3e6cf87a08555fd4d3504e0b875"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.884430 4724 scope.go:117] "RemoveContainer" containerID="0d12246f7ac283fd04bef33e790f782f6d2590bdb3c6fb6836964d4636a0ae6f"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.931956 4724 scope.go:117] "RemoveContainer" containerID="685ed3136017ae0ff55073399d58ad796fb73f378f72676c21f60ecc6f86cedb"
Feb 26 11:42:58 crc kubenswrapper[4724]: I0226 11:42:58.973811 4724 scope.go:117] "RemoveContainer" containerID="dbced93decb21b9e23f0f5687b3b63463048bed4da57ca9ddb5457ed23d25894"
Feb 26 11:43:16 crc kubenswrapper[4724]: I0226 11:43:16.907137 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:43:16 crc kubenswrapper[4724]: I0226 11:43:16.908636 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:43:35 crc kubenswrapper[4724]: I0226 11:43:35.977785 4724 generic.go:334] "Generic (PLEG): container finished" podID="3587d474-38c2-4bdb-af02-8f03932c85bc" containerID="c59c76aca1a7185aa5d2259a2e2758e9d227e702276b7470d0c5980d69e1b943" exitCode=0
Feb 26 11:43:35 crc kubenswrapper[4724]: I0226 11:43:35.986644 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" event={"ID":"3587d474-38c2-4bdb-af02-8f03932c85bc","Type":"ContainerDied","Data":"c59c76aca1a7185aa5d2259a2e2758e9d227e702276b7470d0c5980d69e1b943"}
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.433159 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q"
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.567816 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-inventory\") pod \"3587d474-38c2-4bdb-af02-8f03932c85bc\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") "
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.567973 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drrjt\" (UniqueName: \"kubernetes.io/projected/3587d474-38c2-4bdb-af02-8f03932c85bc-kube-api-access-drrjt\") pod \"3587d474-38c2-4bdb-af02-8f03932c85bc\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") "
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.568070 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-ssh-key-openstack-edpm-ipam\") pod \"3587d474-38c2-4bdb-af02-8f03932c85bc\" (UID: \"3587d474-38c2-4bdb-af02-8f03932c85bc\") "
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.588720 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3587d474-38c2-4bdb-af02-8f03932c85bc-kube-api-access-drrjt" (OuterVolumeSpecName: "kube-api-access-drrjt") pod "3587d474-38c2-4bdb-af02-8f03932c85bc" (UID: "3587d474-38c2-4bdb-af02-8f03932c85bc"). InnerVolumeSpecName "kube-api-access-drrjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.598409 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-inventory" (OuterVolumeSpecName: "inventory") pod "3587d474-38c2-4bdb-af02-8f03932c85bc" (UID: "3587d474-38c2-4bdb-af02-8f03932c85bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.603995 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3587d474-38c2-4bdb-af02-8f03932c85bc" (UID: "3587d474-38c2-4bdb-af02-8f03932c85bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.670530 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.670588 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drrjt\" (UniqueName: \"kubernetes.io/projected/3587d474-38c2-4bdb-af02-8f03932c85bc-kube-api-access-drrjt\") on node \"crc\" DevicePath \"\""
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.670604 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3587d474-38c2-4bdb-af02-8f03932c85bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.997162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q" event={"ID":"3587d474-38c2-4bdb-af02-8f03932c85bc","Type":"ContainerDied","Data":"63fd9a8a5d2d33ce00c7602fe87b627a4730dce0bf8a5fa5e9b19e3b89bce853"}
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.997223 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63fd9a8a5d2d33ce00c7602fe87b627a4730dce0bf8a5fa5e9b19e3b89bce853"
Feb 26 11:43:37 crc kubenswrapper[4724]: I0226 11:43:37.997281 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q"
Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.097081 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626"]
Feb 26 11:43:38 crc kubenswrapper[4724]: E0226 11:43:38.097515 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64123437-3525-406b-b430-90dcfb4aaecb" containerName="oc"
Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.097534 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="64123437-3525-406b-b430-90dcfb4aaecb" containerName="oc"
Feb 26 11:43:38 crc kubenswrapper[4724]: E0226 11:43:38.097554 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3587d474-38c2-4bdb-af02-8f03932c85bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.097561 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3587d474-38c2-4bdb-af02-8f03932c85bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.097724 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="64123437-3525-406b-b430-90dcfb4aaecb" containerName="oc"
Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.097754 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3587d474-38c2-4bdb-af02-8f03932c85bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.102987 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.103463 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.103661 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.103835 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.108777 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626"] Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.181033 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.181204 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhv8\" (UniqueName: \"kubernetes.io/projected/a96647e0-99f5-4a89-823e-87f946fbfc02-kube-api-access-7mhv8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.181363 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.283841 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.284061 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.284130 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mhv8\" (UniqueName: 
\"kubernetes.io/projected/a96647e0-99f5-4a89-823e-87f946fbfc02-kube-api-access-7mhv8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.295056 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.295279 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.312165 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mhv8\" (UniqueName: \"kubernetes.io/projected/a96647e0-99f5-4a89-823e-87f946fbfc02-kube-api-access-7mhv8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6c626\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.415014 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:43:38 crc kubenswrapper[4724]: I0226 11:43:38.939316 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626"] Feb 26 11:43:39 crc kubenswrapper[4724]: I0226 11:43:39.006274 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" event={"ID":"a96647e0-99f5-4a89-823e-87f946fbfc02","Type":"ContainerStarted","Data":"5e1458f80b689cbc7c1300d60a237ee122cf373d639f2c11d0b77fa80202e1bf"} Feb 26 11:43:40 crc kubenswrapper[4724]: I0226 11:43:40.015824 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" event={"ID":"a96647e0-99f5-4a89-823e-87f946fbfc02","Type":"ContainerStarted","Data":"b9ec344eb15c2fb9b192c7e1324341179c8322ff58e7765fbbb49cd986141d82"} Feb 26 11:43:40 crc kubenswrapper[4724]: I0226 11:43:40.058156 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" podStartSLOduration=1.643517626 podStartE2EDuration="2.058139807s" podCreationTimestamp="2026-02-26 11:43:38 +0000 UTC" firstStartedPulling="2026-02-26 11:43:38.947702573 +0000 UTC m=+2285.603441688" lastFinishedPulling="2026-02-26 11:43:39.362324744 +0000 UTC m=+2286.018063869" observedRunningTime="2026-02-26 11:43:40.046617474 +0000 UTC m=+2286.702356589" watchObservedRunningTime="2026-02-26 11:43:40.058139807 +0000 UTC m=+2286.713878922" Feb 26 11:43:46 crc kubenswrapper[4724]: I0226 11:43:46.905718 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d 
Feb 26 11:43:46 crc kubenswrapper[4724]: I0226 11:43:46.905718 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:43:46 crc kubenswrapper[4724]: I0226 11:43:46.906257 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:43:46 crc kubenswrapper[4724]: I0226 11:43:46.906306 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 11:43:46 crc kubenswrapper[4724]: I0226 11:43:46.907114 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a82f967eea840ca5de412ba47b3c1a0b6b8bb3dc6664316ec3d32f3e1eadd2e"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 11:43:46 crc kubenswrapper[4724]: I0226 11:43:46.907192 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://6a82f967eea840ca5de412ba47b3c1a0b6b8bb3dc6664316ec3d32f3e1eadd2e" gracePeriod=600
Feb 26 11:43:47 crc kubenswrapper[4724]: I0226 11:43:47.070844 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="6a82f967eea840ca5de412ba47b3c1a0b6b8bb3dc6664316ec3d32f3e1eadd2e" exitCode=0
Feb 26 11:43:47 crc kubenswrapper[4724]: I0226 11:43:47.070905 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"6a82f967eea840ca5de412ba47b3c1a0b6b8bb3dc6664316ec3d32f3e1eadd2e"}
Feb 26 11:43:47 crc kubenswrapper[4724]: I0226 11:43:47.071313 4724 scope.go:117] "RemoveContainer" containerID="98cff81133c634bbb917725458c2489c49b8e00b432441fe8e6244d814ddb5ef"
Feb 26 11:43:48 crc kubenswrapper[4724]: I0226 11:43:48.092530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b"}
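This block is a complete liveness-driven restart. The probe against http://127.0.0.1:8798/health had already failed at 11:42:46 and 11:43:16; the third consecutive failure at 11:43:46 flips the SyncLoop probe status to "unhealthy", the kubelet asks CRI-O to kill the container with the pod's 600-second grace period, PLEG reports ContainerDied, the previous dead container (98cff811...) is garbage-collected, and the replacement starts at 11:43:48. The 30-second spacing and restart-on-third-failure are consistent with periodSeconds=30 and failureThreshold=3 (assumed; the pod spec is not in this log). The accounting is just a consecutive-failure counter:

    # Sketch of the prober's threshold logic under the assumed settings.
    FAILURE_THRESHOLD = 3

    consecutive = 0
    for result in ("failure", "failure", "failure"):  # 11:42:46, 11:43:16, 11:43:46
        consecutive = consecutive + 1 if result == "failure" else 0
        if consecutive >= FAILURE_THRESHOLD:
            print("kill container, honoring grace period, then restart")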
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.154380 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr9ld\" (UniqueName: \"kubernetes.io/projected/12b153bf-7f6f-4454-bf64-bba111ce8391-kube-api-access-kr9ld\") pod \"auto-csr-approver-29535104-nhbvs\" (UID: \"12b153bf-7f6f-4454-bf64-bba111ce8391\") " pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.161160 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.161550 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.162094 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-nhbvs"] Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.163508 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.256154 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr9ld\" (UniqueName: \"kubernetes.io/projected/12b153bf-7f6f-4454-bf64-bba111ce8391-kube-api-access-kr9ld\") pod \"auto-csr-approver-29535104-nhbvs\" (UID: \"12b153bf-7f6f-4454-bf64-bba111ce8391\") " pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.285806 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr9ld\" (UniqueName: \"kubernetes.io/projected/12b153bf-7f6f-4454-bf64-bba111ce8391-kube-api-access-kr9ld\") pod \"auto-csr-approver-29535104-nhbvs\" (UID: \"12b153bf-7f6f-4454-bf64-bba111ce8391\") " pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:00 crc kubenswrapper[4724]: I0226 11:44:00.494774 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:01 crc kubenswrapper[4724]: I0226 11:44:01.006609 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-nhbvs"] Feb 26 11:44:01 crc kubenswrapper[4724]: W0226 11:44:01.014489 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b153bf_7f6f_4454_bf64_bba111ce8391.slice/crio-c9361b1915711854cde2c452fa34a3ce2c4f1885667382ca3fcb60bfcadd93ce WatchSource:0}: Error finding container c9361b1915711854cde2c452fa34a3ce2c4f1885667382ca3fcb60bfcadd93ce: Status 404 returned error can't find the container with id c9361b1915711854cde2c452fa34a3ce2c4f1885667382ca3fcb60bfcadd93ce Feb 26 11:44:01 crc kubenswrapper[4724]: I0226 11:44:01.207998 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" event={"ID":"12b153bf-7f6f-4454-bf64-bba111ce8391","Type":"ContainerStarted","Data":"c9361b1915711854cde2c452fa34a3ce2c4f1885667382ca3fcb60bfcadd93ce"} Feb 26 11:44:02 crc kubenswrapper[4724]: I0226 11:44:02.048943 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vzfph"] Feb 26 11:44:02 crc kubenswrapper[4724]: I0226 11:44:02.058514 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vzfph"] Feb 26 11:44:03 crc kubenswrapper[4724]: I0226 11:44:03.239002 4724 generic.go:334] "Generic (PLEG): container finished" podID="12b153bf-7f6f-4454-bf64-bba111ce8391" containerID="7bfaece8e0c9084d55dc97f998815ba5c2cfe537e8859c4ab4a011b4beee29b9" exitCode=0 Feb 26 11:44:03 crc kubenswrapper[4724]: I0226 11:44:03.239095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" event={"ID":"12b153bf-7f6f-4454-bf64-bba111ce8391","Type":"ContainerDied","Data":"7bfaece8e0c9084d55dc97f998815ba5c2cfe537e8859c4ab4a011b4beee29b9"} Feb 26 11:44:04 crc kubenswrapper[4724]: I0226 11:44:04.009564 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d636b5-9092-4373-a1f9-8c79f5b9ddaa" path="/var/lib/kubelet/pods/41d636b5-9092-4373-a1f9-8c79f5b9ddaa/volumes" Feb 26 11:44:04 crc kubenswrapper[4724]: I0226 11:44:04.628280 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:04 crc kubenswrapper[4724]: I0226 11:44:04.633936 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr9ld\" (UniqueName: \"kubernetes.io/projected/12b153bf-7f6f-4454-bf64-bba111ce8391-kube-api-access-kr9ld\") pod \"12b153bf-7f6f-4454-bf64-bba111ce8391\" (UID: \"12b153bf-7f6f-4454-bf64-bba111ce8391\") " Feb 26 11:44:04 crc kubenswrapper[4724]: I0226 11:44:04.641694 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b153bf-7f6f-4454-bf64-bba111ce8391-kube-api-access-kr9ld" (OuterVolumeSpecName: "kube-api-access-kr9ld") pod "12b153bf-7f6f-4454-bf64-bba111ce8391" (UID: "12b153bf-7f6f-4454-bf64-bba111ce8391"). InnerVolumeSpecName "kube-api-access-kr9ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:44:04 crc kubenswrapper[4724]: I0226 11:44:04.736065 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr9ld\" (UniqueName: \"kubernetes.io/projected/12b153bf-7f6f-4454-bf64-bba111ce8391-kube-api-access-kr9ld\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:05 crc kubenswrapper[4724]: I0226 11:44:05.258140 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" event={"ID":"12b153bf-7f6f-4454-bf64-bba111ce8391","Type":"ContainerDied","Data":"c9361b1915711854cde2c452fa34a3ce2c4f1885667382ca3fcb60bfcadd93ce"} Feb 26 11:44:05 crc kubenswrapper[4724]: I0226 11:44:05.258476 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9361b1915711854cde2c452fa34a3ce2c4f1885667382ca3fcb60bfcadd93ce" Feb 26 11:44:05 crc kubenswrapper[4724]: I0226 11:44:05.258221 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535104-nhbvs" Feb 26 11:44:05 crc kubenswrapper[4724]: I0226 11:44:05.700134 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-6cjhk"] Feb 26 11:44:05 crc kubenswrapper[4724]: I0226 11:44:05.708444 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535098-6cjhk"] Feb 26 11:44:05 crc kubenswrapper[4724]: I0226 11:44:05.991644 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345ac49f-b371-407c-9e58-781821e13a1b" path="/var/lib/kubelet/pods/345ac49f-b371-407c-9e58-781821e13a1b/volumes" Feb 26 11:44:35 crc kubenswrapper[4724]: I0226 11:44:35.049383 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wsfkc"] Feb 26 11:44:35 crc kubenswrapper[4724]: I0226 11:44:35.058590 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wsfkc"] Feb 26 11:44:35 crc kubenswrapper[4724]: I0226 11:44:35.987308 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d" path="/var/lib/kubelet/pods/3ccbf941-d7e2-4a85-9ec2-6b1ddceb126d/volumes" Feb 26 11:44:42 crc kubenswrapper[4724]: I0226 11:44:42.044050 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pqttj"] Feb 26 11:44:42 crc kubenswrapper[4724]: I0226 11:44:42.056435 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pqttj"] Feb 26 11:44:43 crc kubenswrapper[4724]: I0226 11:44:43.988032 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d532a325-83f4-45d6-8363-8fab02ca4afc" path="/var/lib/kubelet/pods/d532a325-83f4-45d6-8363-8fab02ca4afc/volumes" Feb 26 11:44:52 crc kubenswrapper[4724]: I0226 11:44:52.744491 4724 generic.go:334] "Generic (PLEG): container finished" podID="a96647e0-99f5-4a89-823e-87f946fbfc02" containerID="b9ec344eb15c2fb9b192c7e1324341179c8322ff58e7765fbbb49cd986141d82" exitCode=0 Feb 26 11:44:52 crc kubenswrapper[4724]: I0226 11:44:52.744697 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" event={"ID":"a96647e0-99f5-4a89-823e-87f946fbfc02","Type":"ContainerDied","Data":"b9ec344eb15c2fb9b192c7e1324341179c8322ff58e7765fbbb49cd986141d82"} Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.289561 4724 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.305963 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-ssh-key-openstack-edpm-ipam\") pod \"a96647e0-99f5-4a89-823e-87f946fbfc02\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.306060 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-inventory\") pod \"a96647e0-99f5-4a89-823e-87f946fbfc02\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.306257 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mhv8\" (UniqueName: \"kubernetes.io/projected/a96647e0-99f5-4a89-823e-87f946fbfc02-kube-api-access-7mhv8\") pod \"a96647e0-99f5-4a89-823e-87f946fbfc02\" (UID: \"a96647e0-99f5-4a89-823e-87f946fbfc02\") " Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.313899 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96647e0-99f5-4a89-823e-87f946fbfc02-kube-api-access-7mhv8" (OuterVolumeSpecName: "kube-api-access-7mhv8") pod "a96647e0-99f5-4a89-823e-87f946fbfc02" (UID: "a96647e0-99f5-4a89-823e-87f946fbfc02"). InnerVolumeSpecName "kube-api-access-7mhv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.342824 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a96647e0-99f5-4a89-823e-87f946fbfc02" (UID: "a96647e0-99f5-4a89-823e-87f946fbfc02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.342797 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-inventory" (OuterVolumeSpecName: "inventory") pod "a96647e0-99f5-4a89-823e-87f946fbfc02" (UID: "a96647e0-99f5-4a89-823e-87f946fbfc02"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.409991 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.410021 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a96647e0-99f5-4a89-823e-87f946fbfc02-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.410031 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mhv8\" (UniqueName: \"kubernetes.io/projected/a96647e0-99f5-4a89-823e-87f946fbfc02-kube-api-access-7mhv8\") on node \"crc\" DevicePath \"\"" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.763433 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" event={"ID":"a96647e0-99f5-4a89-823e-87f946fbfc02","Type":"ContainerDied","Data":"5e1458f80b689cbc7c1300d60a237ee122cf373d639f2c11d0b77fa80202e1bf"} Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.763710 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1458f80b689cbc7c1300d60a237ee122cf373d639f2c11d0b77fa80202e1bf" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.763507 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6c626" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.914052 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd"] Feb 26 11:44:54 crc kubenswrapper[4724]: E0226 11:44:54.914597 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a96647e0-99f5-4a89-823e-87f946fbfc02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.914625 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a96647e0-99f5-4a89-823e-87f946fbfc02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:44:54 crc kubenswrapper[4724]: E0226 11:44:54.914653 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b153bf-7f6f-4454-bf64-bba111ce8391" containerName="oc" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.914664 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b153bf-7f6f-4454-bf64-bba111ce8391" containerName="oc" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.915641 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b153bf-7f6f-4454-bf64-bba111ce8391" containerName="oc" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.915707 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a96647e0-99f5-4a89-823e-87f946fbfc02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.916419 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.924931 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.924963 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.925655 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.925882 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:44:54 crc kubenswrapper[4724]: I0226 11:44:54.938355 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd"] Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.020447 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.020577 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4r8v\" (UniqueName: \"kubernetes.io/projected/e4b3aebd-40f4-47b8-836b-dd94ef4010af-kube-api-access-k4r8v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.020630 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.121993 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.122165 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.122225 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4r8v\" (UniqueName: 
\"kubernetes.io/projected/e4b3aebd-40f4-47b8-836b-dd94ef4010af-kube-api-access-k4r8v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.130148 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.151706 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4r8v\" (UniqueName: \"kubernetes.io/projected/e4b3aebd-40f4-47b8-836b-dd94ef4010af-kube-api-access-k4r8v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.151710 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.232622 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:44:55 crc kubenswrapper[4724]: I0226 11:44:55.834106 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd"] Feb 26 11:44:55 crc kubenswrapper[4724]: W0226 11:44:55.842062 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4b3aebd_40f4_47b8_836b_dd94ef4010af.slice/crio-8406b930ebf98200c9b9246a285f8d08d0da25b7e5d52e94ad1aa41e351d4ff0 WatchSource:0}: Error finding container 8406b930ebf98200c9b9246a285f8d08d0da25b7e5d52e94ad1aa41e351d4ff0: Status 404 returned error can't find the container with id 8406b930ebf98200c9b9246a285f8d08d0da25b7e5d52e94ad1aa41e351d4ff0 Feb 26 11:44:56 crc kubenswrapper[4724]: I0226 11:44:56.784132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" event={"ID":"e4b3aebd-40f4-47b8-836b-dd94ef4010af","Type":"ContainerStarted","Data":"72643d5ce80e2ad9a57aaf977240399ecff53e115fd8caeb746fed208777a1eb"} Feb 26 11:44:56 crc kubenswrapper[4724]: I0226 11:44:56.784592 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" event={"ID":"e4b3aebd-40f4-47b8-836b-dd94ef4010af","Type":"ContainerStarted","Data":"8406b930ebf98200c9b9246a285f8d08d0da25b7e5d52e94ad1aa41e351d4ff0"} Feb 26 11:44:56 crc kubenswrapper[4724]: I0226 11:44:56.806844 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" podStartSLOduration=2.3961898870000002 podStartE2EDuration="2.806826416s" 
podCreationTimestamp="2026-02-26 11:44:54 +0000 UTC" firstStartedPulling="2026-02-26 11:44:55.853396452 +0000 UTC m=+2362.509135567" lastFinishedPulling="2026-02-26 11:44:56.264032971 +0000 UTC m=+2362.919772096" observedRunningTime="2026-02-26 11:44:56.803631005 +0000 UTC m=+2363.459370140" watchObservedRunningTime="2026-02-26 11:44:56.806826416 +0000 UTC m=+2363.462565531" Feb 26 11:44:59 crc kubenswrapper[4724]: I0226 11:44:59.190094 4724 scope.go:117] "RemoveContainer" containerID="d15d1e88bef1821a3610412a77c674c3b0a76248f6c7eeb262765f4a14d32856" Feb 26 11:44:59 crc kubenswrapper[4724]: I0226 11:44:59.222030 4724 scope.go:117] "RemoveContainer" containerID="563a5f8a59eb586e1fef7cd004568c34552bfbb258e006ce774a199146989847" Feb 26 11:44:59 crc kubenswrapper[4724]: I0226 11:44:59.282652 4724 scope.go:117] "RemoveContainer" containerID="75ea0d78279daa310ecf39795bd2e46093f946f1ef572ee41d4941eed8bed574" Feb 26 11:44:59 crc kubenswrapper[4724]: I0226 11:44:59.362501 4724 scope.go:117] "RemoveContainer" containerID="524c1bfe8f3d0d91ea2d6f151b6b555d1cb1ea11319c9dd099fb62aa16cc2055" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.141167 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc"] Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.142701 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.146558 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.146600 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.165226 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc"] Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.254865 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/825b34fe-cee9-42f2-9954-1aa50c2b748e-config-volume\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.254911 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j75gq\" (UniqueName: \"kubernetes.io/projected/825b34fe-cee9-42f2-9954-1aa50c2b748e-kube-api-access-j75gq\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.255294 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/825b34fe-cee9-42f2-9954-1aa50c2b748e-secret-volume\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.357306 4724 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/825b34fe-cee9-42f2-9954-1aa50c2b748e-config-volume\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.357362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j75gq\" (UniqueName: \"kubernetes.io/projected/825b34fe-cee9-42f2-9954-1aa50c2b748e-kube-api-access-j75gq\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.357564 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/825b34fe-cee9-42f2-9954-1aa50c2b748e-secret-volume\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.358762 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/825b34fe-cee9-42f2-9954-1aa50c2b748e-config-volume\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.375922 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/825b34fe-cee9-42f2-9954-1aa50c2b748e-secret-volume\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.385939 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j75gq\" (UniqueName: \"kubernetes.io/projected/825b34fe-cee9-42f2-9954-1aa50c2b748e-kube-api-access-j75gq\") pod \"collect-profiles-29535105-nm7kc\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:00 crc kubenswrapper[4724]: I0226 11:45:00.483652 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:01 crc kubenswrapper[4724]: I0226 11:45:01.017690 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc"] Feb 26 11:45:01 crc kubenswrapper[4724]: W0226 11:45:01.028421 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod825b34fe_cee9_42f2_9954_1aa50c2b748e.slice/crio-bec0e87a6b9165942c9efd4819eb491265564331a7f565f23c1627e74fd275da WatchSource:0}: Error finding container bec0e87a6b9165942c9efd4819eb491265564331a7f565f23c1627e74fd275da: Status 404 returned error can't find the container with id bec0e87a6b9165942c9efd4819eb491265564331a7f565f23c1627e74fd275da Feb 26 11:45:01 crc kubenswrapper[4724]: I0226 11:45:01.831992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" event={"ID":"825b34fe-cee9-42f2-9954-1aa50c2b748e","Type":"ContainerStarted","Data":"d1baa7ba9938a5a6ae314e9aa6e2cde1549114b86caaf45fc5355a77350f6642"} Feb 26 11:45:01 crc kubenswrapper[4724]: I0226 11:45:01.832370 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" event={"ID":"825b34fe-cee9-42f2-9954-1aa50c2b748e","Type":"ContainerStarted","Data":"bec0e87a6b9165942c9efd4819eb491265564331a7f565f23c1627e74fd275da"} Feb 26 11:45:01 crc kubenswrapper[4724]: I0226 11:45:01.838063 4724 generic.go:334] "Generic (PLEG): container finished" podID="e4b3aebd-40f4-47b8-836b-dd94ef4010af" containerID="72643d5ce80e2ad9a57aaf977240399ecff53e115fd8caeb746fed208777a1eb" exitCode=0 Feb 26 11:45:01 crc kubenswrapper[4724]: I0226 11:45:01.838111 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" event={"ID":"e4b3aebd-40f4-47b8-836b-dd94ef4010af","Type":"ContainerDied","Data":"72643d5ce80e2ad9a57aaf977240399ecff53e115fd8caeb746fed208777a1eb"} Feb 26 11:45:01 crc kubenswrapper[4724]: I0226 11:45:01.876283 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" podStartSLOduration=1.876260765 podStartE2EDuration="1.876260765s" podCreationTimestamp="2026-02-26 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 11:45:01.870482488 +0000 UTC m=+2368.526221603" watchObservedRunningTime="2026-02-26 11:45:01.876260765 +0000 UTC m=+2368.531999880" Feb 26 11:45:02 crc kubenswrapper[4724]: I0226 11:45:02.848592 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" event={"ID":"825b34fe-cee9-42f2-9954-1aa50c2b748e","Type":"ContainerDied","Data":"d1baa7ba9938a5a6ae314e9aa6e2cde1549114b86caaf45fc5355a77350f6642"} Feb 26 11:45:02 crc kubenswrapper[4724]: I0226 11:45:02.848552 4724 generic.go:334] "Generic (PLEG): container finished" podID="825b34fe-cee9-42f2-9954-1aa50c2b748e" containerID="d1baa7ba9938a5a6ae314e9aa6e2cde1549114b86caaf45fc5355a77350f6642" exitCode=0 Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.321670 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.440361 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4r8v\" (UniqueName: \"kubernetes.io/projected/e4b3aebd-40f4-47b8-836b-dd94ef4010af-kube-api-access-k4r8v\") pod \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.440439 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-inventory\") pod \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.440500 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-ssh-key-openstack-edpm-ipam\") pod \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\" (UID: \"e4b3aebd-40f4-47b8-836b-dd94ef4010af\") " Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.448476 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b3aebd-40f4-47b8-836b-dd94ef4010af-kube-api-access-k4r8v" (OuterVolumeSpecName: "kube-api-access-k4r8v") pod "e4b3aebd-40f4-47b8-836b-dd94ef4010af" (UID: "e4b3aebd-40f4-47b8-836b-dd94ef4010af"). InnerVolumeSpecName "kube-api-access-k4r8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.473305 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4b3aebd-40f4-47b8-836b-dd94ef4010af" (UID: "e4b3aebd-40f4-47b8-836b-dd94ef4010af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.473407 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-inventory" (OuterVolumeSpecName: "inventory") pod "e4b3aebd-40f4-47b8-836b-dd94ef4010af" (UID: "e4b3aebd-40f4-47b8-836b-dd94ef4010af"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.542543 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.542584 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4b3aebd-40f4-47b8-836b-dd94ef4010af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.542601 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4r8v\" (UniqueName: \"kubernetes.io/projected/e4b3aebd-40f4-47b8-836b-dd94ef4010af-kube-api-access-k4r8v\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.860164 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.867278 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd" event={"ID":"e4b3aebd-40f4-47b8-836b-dd94ef4010af","Type":"ContainerDied","Data":"8406b930ebf98200c9b9246a285f8d08d0da25b7e5d52e94ad1aa41e351d4ff0"} Feb 26 11:45:03 crc kubenswrapper[4724]: I0226 11:45:03.867332 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8406b930ebf98200c9b9246a285f8d08d0da25b7e5d52e94ad1aa41e351d4ff0" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.003509 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v"] Feb 26 11:45:04 crc kubenswrapper[4724]: E0226 11:45:04.003894 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b3aebd-40f4-47b8-836b-dd94ef4010af" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.003915 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b3aebd-40f4-47b8-836b-dd94ef4010af" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.004214 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b3aebd-40f4-47b8-836b-dd94ef4010af" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.005001 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.009622 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v"] Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.014450 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.014709 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.014918 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.015079 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.051807 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.051956 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjjrl\" (UniqueName: \"kubernetes.io/projected/34c7b1bf-1861-40ec-910b-36f494a396f6-kube-api-access-wjjrl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.052012 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.153713 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.153822 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjjrl\" (UniqueName: \"kubernetes.io/projected/34c7b1bf-1861-40ec-910b-36f494a396f6-kube-api-access-wjjrl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.153854 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.162048 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.176957 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.186794 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjjrl\" (UniqueName: \"kubernetes.io/projected/34c7b1bf-1861-40ec-910b-36f494a396f6-kube-api-access-wjjrl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pvg6v\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.331518 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.460126 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.568152 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j75gq\" (UniqueName: \"kubernetes.io/projected/825b34fe-cee9-42f2-9954-1aa50c2b748e-kube-api-access-j75gq\") pod \"825b34fe-cee9-42f2-9954-1aa50c2b748e\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.568242 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/825b34fe-cee9-42f2-9954-1aa50c2b748e-secret-volume\") pod \"825b34fe-cee9-42f2-9954-1aa50c2b748e\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.568279 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/825b34fe-cee9-42f2-9954-1aa50c2b748e-config-volume\") pod \"825b34fe-cee9-42f2-9954-1aa50c2b748e\" (UID: \"825b34fe-cee9-42f2-9954-1aa50c2b748e\") " Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.569411 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825b34fe-cee9-42f2-9954-1aa50c2b748e-config-volume" (OuterVolumeSpecName: "config-volume") pod "825b34fe-cee9-42f2-9954-1aa50c2b748e" (UID: "825b34fe-cee9-42f2-9954-1aa50c2b748e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.574580 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825b34fe-cee9-42f2-9954-1aa50c2b748e-kube-api-access-j75gq" (OuterVolumeSpecName: "kube-api-access-j75gq") pod "825b34fe-cee9-42f2-9954-1aa50c2b748e" (UID: "825b34fe-cee9-42f2-9954-1aa50c2b748e"). InnerVolumeSpecName "kube-api-access-j75gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.576662 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825b34fe-cee9-42f2-9954-1aa50c2b748e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "825b34fe-cee9-42f2-9954-1aa50c2b748e" (UID: "825b34fe-cee9-42f2-9954-1aa50c2b748e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.670351 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j75gq\" (UniqueName: \"kubernetes.io/projected/825b34fe-cee9-42f2-9954-1aa50c2b748e-kube-api-access-j75gq\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.670707 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/825b34fe-cee9-42f2-9954-1aa50c2b748e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.670720 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/825b34fe-cee9-42f2-9954-1aa50c2b748e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.870973 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" event={"ID":"825b34fe-cee9-42f2-9954-1aa50c2b748e","Type":"ContainerDied","Data":"bec0e87a6b9165942c9efd4819eb491265564331a7f565f23c1627e74fd275da"} Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.871016 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bec0e87a6b9165942c9efd4819eb491265564331a7f565f23c1627e74fd275da" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.871052 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc" Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.943015 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"] Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.954408 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535060-x9rz4"] Feb 26 11:45:04 crc kubenswrapper[4724]: W0226 11:45:04.959812 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34c7b1bf_1861_40ec_910b_36f494a396f6.slice/crio-6093b42353f92cae4d2b9774244449f6ccbbf531e617f4397b375f420b12d531 WatchSource:0}: Error finding container 6093b42353f92cae4d2b9774244449f6ccbbf531e617f4397b375f420b12d531: Status 404 returned error can't find the container with id 6093b42353f92cae4d2b9774244449f6ccbbf531e617f4397b375f420b12d531 Feb 26 11:45:04 crc kubenswrapper[4724]: I0226 11:45:04.964475 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v"] Feb 26 11:45:05 crc kubenswrapper[4724]: I0226 11:45:05.882322 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" event={"ID":"34c7b1bf-1861-40ec-910b-36f494a396f6","Type":"ContainerStarted","Data":"8ae10e035c69c439c15af42873441ba404d8cd16a994076f396908cfb7baf21e"} Feb 26 11:45:05 crc kubenswrapper[4724]: I0226 11:45:05.883435 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" event={"ID":"34c7b1bf-1861-40ec-910b-36f494a396f6","Type":"ContainerStarted","Data":"6093b42353f92cae4d2b9774244449f6ccbbf531e617f4397b375f420b12d531"} Feb 26 11:45:05 crc kubenswrapper[4724]: I0226 11:45:05.905566 4724 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" podStartSLOduration=2.436470803 podStartE2EDuration="2.905546671s" podCreationTimestamp="2026-02-26 11:45:03 +0000 UTC" firstStartedPulling="2026-02-26 11:45:04.962569063 +0000 UTC m=+2371.618308178" lastFinishedPulling="2026-02-26 11:45:05.431644931 +0000 UTC m=+2372.087384046" observedRunningTime="2026-02-26 11:45:05.901841207 +0000 UTC m=+2372.557580322" watchObservedRunningTime="2026-02-26 11:45:05.905546671 +0000 UTC m=+2372.561285786" Feb 26 11:45:05 crc kubenswrapper[4724]: I0226 11:45:05.991659 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3546882-cc78-45d2-b99d-9d14605bdc5b" path="/var/lib/kubelet/pods/f3546882-cc78-45d2-b99d-9d14605bdc5b/volumes" Feb 26 11:45:15 crc kubenswrapper[4724]: I0226 11:45:15.043232 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2brpm"] Feb 26 11:45:15 crc kubenswrapper[4724]: I0226 11:45:15.052477 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2brpm"] Feb 26 11:45:15 crc kubenswrapper[4724]: I0226 11:45:15.993203 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1754e31b-5617-4b43-96ec-fa7f2845b2de" path="/var/lib/kubelet/pods/1754e31b-5617-4b43-96ec-fa7f2845b2de/volumes" Feb 26 11:45:43 crc kubenswrapper[4724]: I0226 11:45:43.207712 4724 generic.go:334] "Generic (PLEG): container finished" podID="34c7b1bf-1861-40ec-910b-36f494a396f6" containerID="8ae10e035c69c439c15af42873441ba404d8cd16a994076f396908cfb7baf21e" exitCode=0 Feb 26 11:45:43 crc kubenswrapper[4724]: I0226 11:45:43.207809 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" event={"ID":"34c7b1bf-1861-40ec-910b-36f494a396f6","Type":"ContainerDied","Data":"8ae10e035c69c439c15af42873441ba404d8cd16a994076f396908cfb7baf21e"} Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.664341 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.785223 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-inventory\") pod \"34c7b1bf-1861-40ec-910b-36f494a396f6\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.785510 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjjrl\" (UniqueName: \"kubernetes.io/projected/34c7b1bf-1861-40ec-910b-36f494a396f6-kube-api-access-wjjrl\") pod \"34c7b1bf-1861-40ec-910b-36f494a396f6\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.785572 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-ssh-key-openstack-edpm-ipam\") pod \"34c7b1bf-1861-40ec-910b-36f494a396f6\" (UID: \"34c7b1bf-1861-40ec-910b-36f494a396f6\") " Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.793942 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c7b1bf-1861-40ec-910b-36f494a396f6-kube-api-access-wjjrl" (OuterVolumeSpecName: "kube-api-access-wjjrl") pod "34c7b1bf-1861-40ec-910b-36f494a396f6" (UID: "34c7b1bf-1861-40ec-910b-36f494a396f6"). InnerVolumeSpecName "kube-api-access-wjjrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.821286 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-inventory" (OuterVolumeSpecName: "inventory") pod "34c7b1bf-1861-40ec-910b-36f494a396f6" (UID: "34c7b1bf-1861-40ec-910b-36f494a396f6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.822780 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34c7b1bf-1861-40ec-910b-36f494a396f6" (UID: "34c7b1bf-1861-40ec-910b-36f494a396f6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.888196 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.888238 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34c7b1bf-1861-40ec-910b-36f494a396f6-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:44 crc kubenswrapper[4724]: I0226 11:45:44.888247 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjjrl\" (UniqueName: \"kubernetes.io/projected/34c7b1bf-1861-40ec-910b-36f494a396f6-kube-api-access-wjjrl\") on node \"crc\" DevicePath \"\"" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.226342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" event={"ID":"34c7b1bf-1861-40ec-910b-36f494a396f6","Type":"ContainerDied","Data":"6093b42353f92cae4d2b9774244449f6ccbbf531e617f4397b375f420b12d531"} Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.226755 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6093b42353f92cae4d2b9774244449f6ccbbf531e617f4397b375f420b12d531" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.226393 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pvg6v" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.409463 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt"] Feb 26 11:45:45 crc kubenswrapper[4724]: E0226 11:45:45.410306 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825b34fe-cee9-42f2-9954-1aa50c2b748e" containerName="collect-profiles" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.410320 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="825b34fe-cee9-42f2-9954-1aa50c2b748e" containerName="collect-profiles" Feb 26 11:45:45 crc kubenswrapper[4724]: E0226 11:45:45.410346 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c7b1bf-1861-40ec-910b-36f494a396f6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.410353 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c7b1bf-1861-40ec-910b-36f494a396f6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.410950 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="825b34fe-cee9-42f2-9954-1aa50c2b748e" containerName="collect-profiles" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.410971 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c7b1bf-1861-40ec-910b-36f494a396f6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.413331 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.429909 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.431637 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.431957 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.433832 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.447768 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt"] Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.521369 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p9f9\" (UniqueName: \"kubernetes.io/projected/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-kube-api-access-8p9f9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.521773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.521881 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.624518 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p9f9\" (UniqueName: \"kubernetes.io/projected/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-kube-api-access-8p9f9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.624654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.624697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.632019 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.640380 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.645719 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p9f9\" (UniqueName: \"kubernetes.io/projected/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-kube-api-access-8p9f9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:45 crc kubenswrapper[4724]: I0226 11:45:45.756701 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:45:46 crc kubenswrapper[4724]: I0226 11:45:46.290964 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt"] Feb 26 11:45:47 crc kubenswrapper[4724]: I0226 11:45:47.247850 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" event={"ID":"cdfbc2ed-ca25-4209-b3d8-d372bc73801e","Type":"ContainerStarted","Data":"09dadd7b843a92260ae4916248137830c98e2179b4fbdb4730ea7642bb8421ef"} Feb 26 11:45:48 crc kubenswrapper[4724]: I0226 11:45:48.256813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" event={"ID":"cdfbc2ed-ca25-4209-b3d8-d372bc73801e","Type":"ContainerStarted","Data":"669229a53877032b3344787799ff2c1b46e4cb0408c57f39466e302174947d68"} Feb 26 11:45:48 crc kubenswrapper[4724]: I0226 11:45:48.272893 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" podStartSLOduration=2.1935239380000002 podStartE2EDuration="3.272874588s" podCreationTimestamp="2026-02-26 11:45:45 +0000 UTC" firstStartedPulling="2026-02-26 11:45:46.306553676 +0000 UTC m=+2412.962292791" lastFinishedPulling="2026-02-26 11:45:47.385904326 +0000 UTC m=+2414.041643441" observedRunningTime="2026-02-26 11:45:48.272464028 +0000 UTC m=+2414.928203143" watchObservedRunningTime="2026-02-26 11:45:48.272874588 +0000 UTC m=+2414.928613703" Feb 26 11:45:59 crc kubenswrapper[4724]: I0226 11:45:59.582404 4724 scope.go:117] "RemoveContainer" containerID="ec77abe513b5c472b56cee1421d6050aa9092dbd78704fe99fa22f0ac25b7bcf" Feb 26 11:45:59 crc kubenswrapper[4724]: I0226 
11:45:59.611710 4724 scope.go:117] "RemoveContainer" containerID="f5d855befd6f0cf09abce249c8c865a342d106e58162aa0b964bfc614c10c871" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.140632 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535106-gjk6f"] Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.142331 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.145864 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.146162 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.147239 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.151268 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-gjk6f"] Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.220126 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r479\" (UniqueName: \"kubernetes.io/projected/ffd8514e-7ae7-4dff-a626-86bc0c716293-kube-api-access-8r479\") pod \"auto-csr-approver-29535106-gjk6f\" (UID: \"ffd8514e-7ae7-4dff-a626-86bc0c716293\") " pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.322114 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r479\" (UniqueName: \"kubernetes.io/projected/ffd8514e-7ae7-4dff-a626-86bc0c716293-kube-api-access-8r479\") pod \"auto-csr-approver-29535106-gjk6f\" (UID: \"ffd8514e-7ae7-4dff-a626-86bc0c716293\") " pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.341994 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r479\" (UniqueName: \"kubernetes.io/projected/ffd8514e-7ae7-4dff-a626-86bc0c716293-kube-api-access-8r479\") pod \"auto-csr-approver-29535106-gjk6f\" (UID: \"ffd8514e-7ae7-4dff-a626-86bc0c716293\") " pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.473585 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:00 crc kubenswrapper[4724]: I0226 11:46:00.959570 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-gjk6f"] Feb 26 11:46:01 crc kubenswrapper[4724]: I0226 11:46:01.372523 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" event={"ID":"ffd8514e-7ae7-4dff-a626-86bc0c716293","Type":"ContainerStarted","Data":"8723c23c652d070e0d4cf2cc3f2a2581c569005b5ef4bbed373d9cd093c02074"} Feb 26 11:46:03 crc kubenswrapper[4724]: I0226 11:46:03.390593 4724 generic.go:334] "Generic (PLEG): container finished" podID="ffd8514e-7ae7-4dff-a626-86bc0c716293" containerID="975089a6d1d8eb823665d7b973a5b2021971cd8e5f8d2a62f4a23eff8998969b" exitCode=0 Feb 26 11:46:03 crc kubenswrapper[4724]: I0226 11:46:03.390717 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" event={"ID":"ffd8514e-7ae7-4dff-a626-86bc0c716293","Type":"ContainerDied","Data":"975089a6d1d8eb823665d7b973a5b2021971cd8e5f8d2a62f4a23eff8998969b"} Feb 26 11:46:04 crc kubenswrapper[4724]: I0226 11:46:04.757030 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:04 crc kubenswrapper[4724]: I0226 11:46:04.837808 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r479\" (UniqueName: \"kubernetes.io/projected/ffd8514e-7ae7-4dff-a626-86bc0c716293-kube-api-access-8r479\") pod \"ffd8514e-7ae7-4dff-a626-86bc0c716293\" (UID: \"ffd8514e-7ae7-4dff-a626-86bc0c716293\") " Feb 26 11:46:04 crc kubenswrapper[4724]: I0226 11:46:04.843521 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd8514e-7ae7-4dff-a626-86bc0c716293-kube-api-access-8r479" (OuterVolumeSpecName: "kube-api-access-8r479") pod "ffd8514e-7ae7-4dff-a626-86bc0c716293" (UID: "ffd8514e-7ae7-4dff-a626-86bc0c716293"). InnerVolumeSpecName "kube-api-access-8r479". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:46:04 crc kubenswrapper[4724]: I0226 11:46:04.940165 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r479\" (UniqueName: \"kubernetes.io/projected/ffd8514e-7ae7-4dff-a626-86bc0c716293-kube-api-access-8r479\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:05 crc kubenswrapper[4724]: I0226 11:46:05.412440 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" event={"ID":"ffd8514e-7ae7-4dff-a626-86bc0c716293","Type":"ContainerDied","Data":"8723c23c652d070e0d4cf2cc3f2a2581c569005b5ef4bbed373d9cd093c02074"} Feb 26 11:46:05 crc kubenswrapper[4724]: I0226 11:46:05.413068 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8723c23c652d070e0d4cf2cc3f2a2581c569005b5ef4bbed373d9cd093c02074" Feb 26 11:46:05 crc kubenswrapper[4724]: I0226 11:46:05.412546 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535106-gjk6f" Feb 26 11:46:05 crc kubenswrapper[4724]: I0226 11:46:05.842153 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-5dgl8"] Feb 26 11:46:05 crc kubenswrapper[4724]: I0226 11:46:05.850134 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535100-5dgl8"] Feb 26 11:46:05 crc kubenswrapper[4724]: I0226 11:46:05.991444 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec5fe4d-b8c4-42ed-894e-c5927452d116" path="/var/lib/kubelet/pods/1ec5fe4d-b8c4-42ed-894e-c5927452d116/volumes" Feb 26 11:46:16 crc kubenswrapper[4724]: I0226 11:46:16.906514 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:46:16 crc kubenswrapper[4724]: I0226 11:46:16.907028 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:46:31 crc kubenswrapper[4724]: I0226 11:46:31.635649 4724 generic.go:334] "Generic (PLEG): container finished" podID="cdfbc2ed-ca25-4209-b3d8-d372bc73801e" containerID="669229a53877032b3344787799ff2c1b46e4cb0408c57f39466e302174947d68" exitCode=0 Feb 26 11:46:31 crc kubenswrapper[4724]: I0226 11:46:31.636153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" event={"ID":"cdfbc2ed-ca25-4209-b3d8-d372bc73801e","Type":"ContainerDied","Data":"669229a53877032b3344787799ff2c1b46e4cb0408c57f39466e302174947d68"} Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.087059 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.193880 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-inventory\") pod \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.193990 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p9f9\" (UniqueName: \"kubernetes.io/projected/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-kube-api-access-8p9f9\") pod \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.194079 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-ssh-key-openstack-edpm-ipam\") pod \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\" (UID: \"cdfbc2ed-ca25-4209-b3d8-d372bc73801e\") " Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.201482 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-kube-api-access-8p9f9" (OuterVolumeSpecName: "kube-api-access-8p9f9") pod "cdfbc2ed-ca25-4209-b3d8-d372bc73801e" (UID: "cdfbc2ed-ca25-4209-b3d8-d372bc73801e"). InnerVolumeSpecName "kube-api-access-8p9f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.225344 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-inventory" (OuterVolumeSpecName: "inventory") pod "cdfbc2ed-ca25-4209-b3d8-d372bc73801e" (UID: "cdfbc2ed-ca25-4209-b3d8-d372bc73801e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.237816 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdfbc2ed-ca25-4209-b3d8-d372bc73801e" (UID: "cdfbc2ed-ca25-4209-b3d8-d372bc73801e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.295512 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.295668 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p9f9\" (UniqueName: \"kubernetes.io/projected/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-kube-api-access-8p9f9\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.295738 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfbc2ed-ca25-4209-b3d8-d372bc73801e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.655302 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" event={"ID":"cdfbc2ed-ca25-4209-b3d8-d372bc73801e","Type":"ContainerDied","Data":"09dadd7b843a92260ae4916248137830c98e2179b4fbdb4730ea7642bb8421ef"} Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.655369 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09dadd7b843a92260ae4916248137830c98e2179b4fbdb4730ea7642bb8421ef" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.655373 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.807284 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gwnp7"] Feb 26 11:46:33 crc kubenswrapper[4724]: E0226 11:46:33.807766 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd8514e-7ae7-4dff-a626-86bc0c716293" containerName="oc" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.807787 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd8514e-7ae7-4dff-a626-86bc0c716293" containerName="oc" Feb 26 11:46:33 crc kubenswrapper[4724]: E0226 11:46:33.808120 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfbc2ed-ca25-4209-b3d8-d372bc73801e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.808141 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfbc2ed-ca25-4209-b3d8-d372bc73801e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.808438 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfbc2ed-ca25-4209-b3d8-d372bc73801e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.808480 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd8514e-7ae7-4dff-a626-86bc0c716293" containerName="oc" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.809291 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.814622 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.814872 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.815229 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.815343 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.822147 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gwnp7"] Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.910770 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.910872 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgtq\" (UniqueName: \"kubernetes.io/projected/2206a227-78b8-4ca1-a425-fb061de91843-kube-api-access-pkgtq\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:33 crc kubenswrapper[4724]: I0226 11:46:33.911297 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.012608 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.013080 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.013298 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgtq\" (UniqueName: \"kubernetes.io/projected/2206a227-78b8-4ca1-a425-fb061de91843-kube-api-access-pkgtq\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc 
kubenswrapper[4724]: I0226 11:46:34.016148 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.016297 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.029267 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.031315 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.031938 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgtq\" (UniqueName: \"kubernetes.io/projected/2206a227-78b8-4ca1-a425-fb061de91843-kube-api-access-pkgtq\") pod \"ssh-known-hosts-edpm-deployment-gwnp7\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.131691 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.139529 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.288726 4724 container_manager_linux.go:630] "Failed to ensure state" containerName="/system.slice" err="failed to move PID 120119 into the system container \"/system.slice\": " Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.711248 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gwnp7"] Feb 26 11:46:34 crc kubenswrapper[4724]: I0226 11:46:34.719815 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:46:35 crc kubenswrapper[4724]: I0226 11:46:35.191823 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:46:35 crc kubenswrapper[4724]: I0226 11:46:35.678685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" event={"ID":"2206a227-78b8-4ca1-a425-fb061de91843","Type":"ContainerStarted","Data":"22652cb41decac4434b04aba939812ff11ce0ff9d03daff3829f4685b8a70cd2"} Feb 26 11:46:35 crc kubenswrapper[4724]: I0226 11:46:35.679334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" event={"ID":"2206a227-78b8-4ca1-a425-fb061de91843","Type":"ContainerStarted","Data":"e2d5de72065fb8a37290a3650d91ba99d2268955c1fc4b12b74d2665169e7ddb"} Feb 26 11:46:35 crc kubenswrapper[4724]: I0226 11:46:35.704586 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" podStartSLOduration=2.234912438 podStartE2EDuration="2.704563977s" podCreationTimestamp="2026-02-26 11:46:33 +0000 UTC" firstStartedPulling="2026-02-26 11:46:34.719555644 +0000 UTC m=+2461.375294759" lastFinishedPulling="2026-02-26 11:46:35.189207183 +0000 UTC m=+2461.844946298" observedRunningTime="2026-02-26 11:46:35.697925018 +0000 UTC m=+2462.353664153" watchObservedRunningTime="2026-02-26 11:46:35.704563977 +0000 UTC m=+2462.360303092" Feb 26 11:46:42 crc kubenswrapper[4724]: I0226 11:46:42.739616 4724 generic.go:334] "Generic (PLEG): container finished" podID="2206a227-78b8-4ca1-a425-fb061de91843" containerID="22652cb41decac4434b04aba939812ff11ce0ff9d03daff3829f4685b8a70cd2" exitCode=0 Feb 26 11:46:42 crc kubenswrapper[4724]: I0226 11:46:42.739699 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" event={"ID":"2206a227-78b8-4ca1-a425-fb061de91843","Type":"ContainerDied","Data":"22652cb41decac4434b04aba939812ff11ce0ff9d03daff3829f4685b8a70cd2"} Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.133601 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.218138 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgtq\" (UniqueName: \"kubernetes.io/projected/2206a227-78b8-4ca1-a425-fb061de91843-kube-api-access-pkgtq\") pod \"2206a227-78b8-4ca1-a425-fb061de91843\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.218250 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-ssh-key-openstack-edpm-ipam\") pod \"2206a227-78b8-4ca1-a425-fb061de91843\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.218308 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-inventory-0\") pod \"2206a227-78b8-4ca1-a425-fb061de91843\" (UID: \"2206a227-78b8-4ca1-a425-fb061de91843\") " Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.229777 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2206a227-78b8-4ca1-a425-fb061de91843-kube-api-access-pkgtq" (OuterVolumeSpecName: "kube-api-access-pkgtq") pod "2206a227-78b8-4ca1-a425-fb061de91843" (UID: "2206a227-78b8-4ca1-a425-fb061de91843"). InnerVolumeSpecName "kube-api-access-pkgtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.263952 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2206a227-78b8-4ca1-a425-fb061de91843" (UID: "2206a227-78b8-4ca1-a425-fb061de91843"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.266652 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2206a227-78b8-4ca1-a425-fb061de91843" (UID: "2206a227-78b8-4ca1-a425-fb061de91843"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.320300 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgtq\" (UniqueName: \"kubernetes.io/projected/2206a227-78b8-4ca1-a425-fb061de91843-kube-api-access-pkgtq\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.320329 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.320345 4724 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2206a227-78b8-4ca1-a425-fb061de91843-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.759630 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" event={"ID":"2206a227-78b8-4ca1-a425-fb061de91843","Type":"ContainerDied","Data":"e2d5de72065fb8a37290a3650d91ba99d2268955c1fc4b12b74d2665169e7ddb"} Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.759941 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d5de72065fb8a37290a3650d91ba99d2268955c1fc4b12b74d2665169e7ddb" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.760001 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gwnp7" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.858468 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w"] Feb 26 11:46:44 crc kubenswrapper[4724]: E0226 11:46:44.858936 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2206a227-78b8-4ca1-a425-fb061de91843" containerName="ssh-known-hosts-edpm-deployment" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.858959 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2206a227-78b8-4ca1-a425-fb061de91843" containerName="ssh-known-hosts-edpm-deployment" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.861558 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2206a227-78b8-4ca1-a425-fb061de91843" containerName="ssh-known-hosts-edpm-deployment" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.862532 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.865762 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.866319 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.866461 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.868596 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.870147 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w"] Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.933575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.933735 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:44 crc kubenswrapper[4724]: I0226 11:46:44.934011 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ww2\" (UniqueName: \"kubernetes.io/projected/cad1abca-ca70-4988-804c-ca6d35ba05d7-kube-api-access-44ww2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.036166 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.036326 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.036429 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44ww2\" (UniqueName: \"kubernetes.io/projected/cad1abca-ca70-4988-804c-ca6d35ba05d7-kube-api-access-44ww2\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.045963 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.047289 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.066089 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44ww2\" (UniqueName: \"kubernetes.io/projected/cad1abca-ca70-4988-804c-ca6d35ba05d7-kube-api-access-44ww2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rds5w\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.179549 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.749296 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w"] Feb 26 11:46:45 crc kubenswrapper[4724]: I0226 11:46:45.776707 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" event={"ID":"cad1abca-ca70-4988-804c-ca6d35ba05d7","Type":"ContainerStarted","Data":"fddde90fe0f89e5325a712844c75a287fa7714f5ae62fe4d9df29b03346cb4de"} Feb 26 11:46:46 crc kubenswrapper[4724]: I0226 11:46:46.906696 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:46:46 crc kubenswrapper[4724]: I0226 11:46:46.906757 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:46:47 crc kubenswrapper[4724]: I0226 11:46:47.807066 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" event={"ID":"cad1abca-ca70-4988-804c-ca6d35ba05d7","Type":"ContainerStarted","Data":"59d3c7b05e546b29e395bf641147a6e3246e693653edc9c45ad2b03b2e77171b"} Feb 26 11:46:47 crc kubenswrapper[4724]: I0226 11:46:47.822174 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" podStartSLOduration=2.433314892 podStartE2EDuration="3.822152676s" 
podCreationTimestamp="2026-02-26 11:46:44 +0000 UTC" firstStartedPulling="2026-02-26 11:46:45.75340399 +0000 UTC m=+2472.409143105" lastFinishedPulling="2026-02-26 11:46:47.142241774 +0000 UTC m=+2473.797980889" observedRunningTime="2026-02-26 11:46:47.821193261 +0000 UTC m=+2474.476932386" watchObservedRunningTime="2026-02-26 11:46:47.822152676 +0000 UTC m=+2474.477891801" Feb 26 11:46:54 crc kubenswrapper[4724]: I0226 11:46:54.870373 4724 generic.go:334] "Generic (PLEG): container finished" podID="cad1abca-ca70-4988-804c-ca6d35ba05d7" containerID="59d3c7b05e546b29e395bf641147a6e3246e693653edc9c45ad2b03b2e77171b" exitCode=0 Feb 26 11:46:54 crc kubenswrapper[4724]: I0226 11:46:54.870514 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" event={"ID":"cad1abca-ca70-4988-804c-ca6d35ba05d7","Type":"ContainerDied","Data":"59d3c7b05e546b29e395bf641147a6e3246e693653edc9c45ad2b03b2e77171b"} Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.322283 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.509090 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-inventory\") pod \"cad1abca-ca70-4988-804c-ca6d35ba05d7\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.509331 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-ssh-key-openstack-edpm-ipam\") pod \"cad1abca-ca70-4988-804c-ca6d35ba05d7\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.509443 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44ww2\" (UniqueName: \"kubernetes.io/projected/cad1abca-ca70-4988-804c-ca6d35ba05d7-kube-api-access-44ww2\") pod \"cad1abca-ca70-4988-804c-ca6d35ba05d7\" (UID: \"cad1abca-ca70-4988-804c-ca6d35ba05d7\") " Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.514318 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad1abca-ca70-4988-804c-ca6d35ba05d7-kube-api-access-44ww2" (OuterVolumeSpecName: "kube-api-access-44ww2") pod "cad1abca-ca70-4988-804c-ca6d35ba05d7" (UID: "cad1abca-ca70-4988-804c-ca6d35ba05d7"). InnerVolumeSpecName "kube-api-access-44ww2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.535529 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cad1abca-ca70-4988-804c-ca6d35ba05d7" (UID: "cad1abca-ca70-4988-804c-ca6d35ba05d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.556269 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-inventory" (OuterVolumeSpecName: "inventory") pod "cad1abca-ca70-4988-804c-ca6d35ba05d7" (UID: "cad1abca-ca70-4988-804c-ca6d35ba05d7"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.615911 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.616158 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44ww2\" (UniqueName: \"kubernetes.io/projected/cad1abca-ca70-4988-804c-ca6d35ba05d7-kube-api-access-44ww2\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.616262 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cad1abca-ca70-4988-804c-ca6d35ba05d7-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.893544 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" event={"ID":"cad1abca-ca70-4988-804c-ca6d35ba05d7","Type":"ContainerDied","Data":"fddde90fe0f89e5325a712844c75a287fa7714f5ae62fe4d9df29b03346cb4de"} Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.893962 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddde90fe0f89e5325a712844c75a287fa7714f5ae62fe4d9df29b03346cb4de" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.893746 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rds5w" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.976260 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn"] Feb 26 11:46:56 crc kubenswrapper[4724]: E0226 11:46:56.976700 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad1abca-ca70-4988-804c-ca6d35ba05d7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.976719 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad1abca-ca70-4988-804c-ca6d35ba05d7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.976987 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad1abca-ca70-4988-804c-ca6d35ba05d7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.978035 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.980533 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.982804 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.982828 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.992309 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn"] Feb 26 11:46:56 crc kubenswrapper[4724]: I0226 11:46:56.995319 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.125600 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.125968 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.126108 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lpw\" (UniqueName: \"kubernetes.io/projected/fa584460-b6d4-4fe8-b351-f55f6c5a969a-kube-api-access-s6lpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.229390 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.229513 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lpw\" (UniqueName: \"kubernetes.io/projected/fa584460-b6d4-4fe8-b351-f55f6c5a969a-kube-api-access-s6lpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.230160 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.236543 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.236870 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.252652 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lpw\" (UniqueName: \"kubernetes.io/projected/fa584460-b6d4-4fe8-b351-f55f6c5a969a-kube-api-access-s6lpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.299896 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.817138 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn"] Feb 26 11:46:57 crc kubenswrapper[4724]: I0226 11:46:57.909614 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" event={"ID":"fa584460-b6d4-4fe8-b351-f55f6c5a969a","Type":"ContainerStarted","Data":"e11b2e8710bdf585d6fa583f357cdc732f703eb263db7e1c05719edf78fda425"} Feb 26 11:46:58 crc kubenswrapper[4724]: I0226 11:46:58.921368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" event={"ID":"fa584460-b6d4-4fe8-b351-f55f6c5a969a","Type":"ContainerStarted","Data":"964c4078b0599ad6bd5456f78b32d344a9be955a46bef2998c568a7daba4334c"} Feb 26 11:46:58 crc kubenswrapper[4724]: I0226 11:46:58.944551 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" podStartSLOduration=2.442898566 podStartE2EDuration="2.94453025s" podCreationTimestamp="2026-02-26 11:46:56 +0000 UTC" firstStartedPulling="2026-02-26 11:46:57.833043251 +0000 UTC m=+2484.488782366" lastFinishedPulling="2026-02-26 11:46:58.334674935 +0000 UTC m=+2484.990414050" observedRunningTime="2026-02-26 11:46:58.939685826 +0000 UTC m=+2485.595424941" watchObservedRunningTime="2026-02-26 11:46:58.94453025 +0000 UTC m=+2485.600269365" Feb 26 11:46:59 crc kubenswrapper[4724]: I0226 11:46:59.718273 4724 scope.go:117] "RemoveContainer" containerID="9713a4d0fb94114a0a9331cc38f4ef1c364373d64c51753cfd1957d8d3f4ae9f" Feb 26 11:47:08 crc kubenswrapper[4724]: I0226 11:47:08.012466 4724 generic.go:334] "Generic (PLEG): container finished" podID="fa584460-b6d4-4fe8-b351-f55f6c5a969a" 
containerID="964c4078b0599ad6bd5456f78b32d344a9be955a46bef2998c568a7daba4334c" exitCode=0 Feb 26 11:47:08 crc kubenswrapper[4724]: I0226 11:47:08.013050 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" event={"ID":"fa584460-b6d4-4fe8-b351-f55f6c5a969a","Type":"ContainerDied","Data":"964c4078b0599ad6bd5456f78b32d344a9be955a46bef2998c568a7daba4334c"} Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.398882 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.577465 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-inventory\") pod \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.577766 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-ssh-key-openstack-edpm-ipam\") pod \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.577801 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6lpw\" (UniqueName: \"kubernetes.io/projected/fa584460-b6d4-4fe8-b351-f55f6c5a969a-kube-api-access-s6lpw\") pod \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\" (UID: \"fa584460-b6d4-4fe8-b351-f55f6c5a969a\") " Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.584507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa584460-b6d4-4fe8-b351-f55f6c5a969a-kube-api-access-s6lpw" (OuterVolumeSpecName: "kube-api-access-s6lpw") pod "fa584460-b6d4-4fe8-b351-f55f6c5a969a" (UID: "fa584460-b6d4-4fe8-b351-f55f6c5a969a"). InnerVolumeSpecName "kube-api-access-s6lpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.611207 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa584460-b6d4-4fe8-b351-f55f6c5a969a" (UID: "fa584460-b6d4-4fe8-b351-f55f6c5a969a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.611695 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-inventory" (OuterVolumeSpecName: "inventory") pod "fa584460-b6d4-4fe8-b351-f55f6c5a969a" (UID: "fa584460-b6d4-4fe8-b351-f55f6c5a969a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.680277 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.680315 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6lpw\" (UniqueName: \"kubernetes.io/projected/fa584460-b6d4-4fe8-b351-f55f6c5a969a-kube-api-access-s6lpw\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:09 crc kubenswrapper[4724]: I0226 11:47:09.680325 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa584460-b6d4-4fe8-b351-f55f6c5a969a-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.030243 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" event={"ID":"fa584460-b6d4-4fe8-b351-f55f6c5a969a","Type":"ContainerDied","Data":"e11b2e8710bdf585d6fa583f357cdc732f703eb263db7e1c05719edf78fda425"} Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.030289 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e11b2e8710bdf585d6fa583f357cdc732f703eb263db7e1c05719edf78fda425" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.030341 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.151393 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp"] Feb 26 11:47:10 crc kubenswrapper[4724]: E0226 11:47:10.151834 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa584460-b6d4-4fe8-b351-f55f6c5a969a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.151850 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa584460-b6d4-4fe8-b351-f55f6c5a969a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.152058 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa584460-b6d4-4fe8-b351-f55f6c5a969a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.152918 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.155084 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.155346 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.156357 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.156459 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.156731 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.156735 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.157916 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.158041 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.169605 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp"] Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.291608 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.291891 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.291933 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.291966 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292056 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292111 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292138 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292195 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292238 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfd8d\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-kube-api-access-sfd8d\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292267 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292496 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: 
\"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292559 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292653 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.292693 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394074 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfd8d\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-kube-api-access-sfd8d\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394142 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394211 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394235 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394284 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394313 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394349 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394369 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394387 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394412 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394440 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394477 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394503 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.394552 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.401456 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.402956 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.403079 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.403162 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.403662 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.403804 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.404227 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.404288 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.404286 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.405475 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.406612 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.406896 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.407397 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: 
\"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.412341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfd8d\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-kube-api-access-sfd8d\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:10 crc kubenswrapper[4724]: I0226 11:47:10.472046 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:11 crc kubenswrapper[4724]: I0226 11:47:11.013302 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp"] Feb 26 11:47:11 crc kubenswrapper[4724]: I0226 11:47:11.040654 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" event={"ID":"5f7c705e-b14f-49dc-9510-4c4b71838bbf","Type":"ContainerStarted","Data":"c5873471599a6de48ba9f42ac112a46c203c500694fae77ef53da53af46d8ab5"} Feb 26 11:47:12 crc kubenswrapper[4724]: I0226 11:47:12.050685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" event={"ID":"5f7c705e-b14f-49dc-9510-4c4b71838bbf","Type":"ContainerStarted","Data":"a22924d3ac4a2f6772dedaf88ff78ab3e87633e52c5ee3c1b70f11a51976839d"} Feb 26 11:47:12 crc kubenswrapper[4724]: I0226 11:47:12.073770 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" podStartSLOduration=1.6919952029999998 podStartE2EDuration="2.07375421s" podCreationTimestamp="2026-02-26 11:47:10 +0000 UTC" firstStartedPulling="2026-02-26 11:47:11.012226915 +0000 UTC m=+2497.667966030" lastFinishedPulling="2026-02-26 11:47:11.393985922 +0000 UTC m=+2498.049725037" observedRunningTime="2026-02-26 11:47:12.071681027 +0000 UTC m=+2498.727420162" watchObservedRunningTime="2026-02-26 11:47:12.07375421 +0000 UTC m=+2498.729493325" Feb 26 11:47:16 crc kubenswrapper[4724]: I0226 11:47:16.907077 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:47:16 crc kubenswrapper[4724]: I0226 11:47:16.908369 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:47:16 crc kubenswrapper[4724]: I0226 11:47:16.908459 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:47:16 crc kubenswrapper[4724]: I0226 11:47:16.909339 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b"} 
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:47:16 crc kubenswrapper[4724]: I0226 11:47:16.909403 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" gracePeriod=600 Feb 26 11:47:17 crc kubenswrapper[4724]: E0226 11:47:17.067393 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:47:17 crc kubenswrapper[4724]: I0226 11:47:17.109408 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" exitCode=0 Feb 26 11:47:17 crc kubenswrapper[4724]: I0226 11:47:17.109477 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b"} Feb 26 11:47:17 crc kubenswrapper[4724]: I0226 11:47:17.109570 4724 scope.go:117] "RemoveContainer" containerID="6a82f967eea840ca5de412ba47b3c1a0b6b8bb3dc6664316ec3d32f3e1eadd2e" Feb 26 11:47:17 crc kubenswrapper[4724]: I0226 11:47:17.110254 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:47:17 crc kubenswrapper[4724]: E0226 11:47:17.110496 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:47:23 crc kubenswrapper[4724]: I0226 11:47:23.843457 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdhg9"] Feb 26 11:47:23 crc kubenswrapper[4724]: I0226 11:47:23.847211 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:23 crc kubenswrapper[4724]: I0226 11:47:23.857069 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdhg9"] Feb 26 11:47:23 crc kubenswrapper[4724]: I0226 11:47:23.967824 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbc5\" (UniqueName: \"kubernetes.io/projected/7822292b-3d2e-4b41-8d69-c47458de19cf-kube-api-access-pcbc5\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:23 crc kubenswrapper[4724]: I0226 11:47:23.967911 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-catalog-content\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:23 crc kubenswrapper[4724]: I0226 11:47:23.968069 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-utilities\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.069752 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-utilities\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.069829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbc5\" (UniqueName: \"kubernetes.io/projected/7822292b-3d2e-4b41-8d69-c47458de19cf-kube-api-access-pcbc5\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.069876 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-catalog-content\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.070304 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-catalog-content\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.071017 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-utilities\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.093468 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pcbc5\" (UniqueName: \"kubernetes.io/projected/7822292b-3d2e-4b41-8d69-c47458de19cf-kube-api-access-pcbc5\") pod \"certified-operators-qdhg9\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.220040 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:24 crc kubenswrapper[4724]: I0226 11:47:24.636380 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdhg9"] Feb 26 11:47:25 crc kubenswrapper[4724]: I0226 11:47:25.188108 4724 generic.go:334] "Generic (PLEG): container finished" podID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerID="e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22" exitCode=0 Feb 26 11:47:25 crc kubenswrapper[4724]: I0226 11:47:25.188227 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerDied","Data":"e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22"} Feb 26 11:47:25 crc kubenswrapper[4724]: I0226 11:47:25.188452 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerStarted","Data":"bcd9c0236c6904056f9dec71009b4c9c01134741d08e76a1eb5a40a61e9001ed"} Feb 26 11:47:28 crc kubenswrapper[4724]: I0226 11:47:28.216985 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerStarted","Data":"2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16"} Feb 26 11:47:28 crc kubenswrapper[4724]: I0226 11:47:28.976983 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:47:28 crc kubenswrapper[4724]: E0226 11:47:28.977366 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:47:30 crc kubenswrapper[4724]: I0226 11:47:30.236383 4724 generic.go:334] "Generic (PLEG): container finished" podID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerID="2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16" exitCode=0 Feb 26 11:47:30 crc kubenswrapper[4724]: I0226 11:47:30.236473 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerDied","Data":"2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16"} Feb 26 11:47:31 crc kubenswrapper[4724]: I0226 11:47:31.247741 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerStarted","Data":"b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf"} Feb 26 11:47:31 crc kubenswrapper[4724]: I0226 11:47:31.272601 4724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qdhg9" podStartSLOduration=2.468730648 podStartE2EDuration="8.27258468s" podCreationTimestamp="2026-02-26 11:47:23 +0000 UTC" firstStartedPulling="2026-02-26 11:47:25.189931727 +0000 UTC m=+2511.845670852" lastFinishedPulling="2026-02-26 11:47:30.993785769 +0000 UTC m=+2517.649524884" observedRunningTime="2026-02-26 11:47:31.268718122 +0000 UTC m=+2517.924457247" watchObservedRunningTime="2026-02-26 11:47:31.27258468 +0000 UTC m=+2517.928323805" Feb 26 11:47:34 crc kubenswrapper[4724]: I0226 11:47:34.221459 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:34 crc kubenswrapper[4724]: I0226 11:47:34.221817 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:34 crc kubenswrapper[4724]: I0226 11:47:34.269038 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:43 crc kubenswrapper[4724]: I0226 11:47:43.984871 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:47:43 crc kubenswrapper[4724]: E0226 11:47:43.985916 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:47:44 crc kubenswrapper[4724]: I0226 11:47:44.271697 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:44 crc kubenswrapper[4724]: I0226 11:47:44.353258 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdhg9"] Feb 26 11:47:44 crc kubenswrapper[4724]: I0226 11:47:44.353490 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qdhg9" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="registry-server" containerID="cri-o://b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf" gracePeriod=2 Feb 26 11:47:44 crc kubenswrapper[4724]: I0226 11:47:44.841935 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.014209 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-utilities\") pod \"7822292b-3d2e-4b41-8d69-c47458de19cf\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.014544 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcbc5\" (UniqueName: \"kubernetes.io/projected/7822292b-3d2e-4b41-8d69-c47458de19cf-kube-api-access-pcbc5\") pod \"7822292b-3d2e-4b41-8d69-c47458de19cf\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.014589 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-catalog-content\") pod \"7822292b-3d2e-4b41-8d69-c47458de19cf\" (UID: \"7822292b-3d2e-4b41-8d69-c47458de19cf\") " Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.015063 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-utilities" (OuterVolumeSpecName: "utilities") pod "7822292b-3d2e-4b41-8d69-c47458de19cf" (UID: "7822292b-3d2e-4b41-8d69-c47458de19cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.017313 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.028456 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7822292b-3d2e-4b41-8d69-c47458de19cf-kube-api-access-pcbc5" (OuterVolumeSpecName: "kube-api-access-pcbc5") pod "7822292b-3d2e-4b41-8d69-c47458de19cf" (UID: "7822292b-3d2e-4b41-8d69-c47458de19cf"). InnerVolumeSpecName "kube-api-access-pcbc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.069411 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7822292b-3d2e-4b41-8d69-c47458de19cf" (UID: "7822292b-3d2e-4b41-8d69-c47458de19cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.119824 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcbc5\" (UniqueName: \"kubernetes.io/projected/7822292b-3d2e-4b41-8d69-c47458de19cf-kube-api-access-pcbc5\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.119867 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7822292b-3d2e-4b41-8d69-c47458de19cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.365670 4724 generic.go:334] "Generic (PLEG): container finished" podID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerID="b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf" exitCode=0 Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.365723 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerDied","Data":"b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf"} Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.365755 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdhg9" event={"ID":"7822292b-3d2e-4b41-8d69-c47458de19cf","Type":"ContainerDied","Data":"bcd9c0236c6904056f9dec71009b4c9c01134741d08e76a1eb5a40a61e9001ed"} Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.365773 4724 scope.go:117] "RemoveContainer" containerID="b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.365782 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdhg9" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.391263 4724 scope.go:117] "RemoveContainer" containerID="2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.416692 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdhg9"] Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.423281 4724 scope.go:117] "RemoveContainer" containerID="e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.431333 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdhg9"] Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.472941 4724 scope.go:117] "RemoveContainer" containerID="b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf" Feb 26 11:47:45 crc kubenswrapper[4724]: E0226 11:47:45.473452 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf\": container with ID starting with b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf not found: ID does not exist" containerID="b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.473630 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf"} err="failed to get container status \"b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf\": rpc error: code = NotFound desc = could not find container \"b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf\": container with ID starting with b3712776357af7fc4e65a0e5d9de4dcf70f9b6894cd92f69e97810680ad14ecf not found: ID does not exist" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.473747 4724 scope.go:117] "RemoveContainer" containerID="2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16" Feb 26 11:47:45 crc kubenswrapper[4724]: E0226 11:47:45.474127 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16\": container with ID starting with 2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16 not found: ID does not exist" containerID="2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.474243 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16"} err="failed to get container status \"2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16\": rpc error: code = NotFound desc = could not find container \"2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16\": container with ID starting with 2d49dbcb2115470a097541a39fd2a4f0cc00d0e2bcd96137bf68de63c33fdf16 not found: ID does not exist" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.474333 4724 scope.go:117] "RemoveContainer" containerID="e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22" Feb 26 11:47:45 crc kubenswrapper[4724]: E0226 11:47:45.474647 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22\": container with ID starting with e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22 not found: ID does not exist" containerID="e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.474681 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22"} err="failed to get container status \"e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22\": rpc error: code = NotFound desc = could not find container \"e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22\": container with ID starting with e399be3d6e5aa39031b331673f6ad83db1c4cd4f1e8f188f1305a37af9fa3f22 not found: ID does not exist" Feb 26 11:47:45 crc kubenswrapper[4724]: I0226 11:47:45.987326 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" path="/var/lib/kubelet/pods/7822292b-3d2e-4b41-8d69-c47458de19cf/volumes" Feb 26 11:47:49 crc kubenswrapper[4724]: I0226 11:47:49.400429 4724 generic.go:334] "Generic (PLEG): container finished" podID="5f7c705e-b14f-49dc-9510-4c4b71838bbf" containerID="a22924d3ac4a2f6772dedaf88ff78ab3e87633e52c5ee3c1b70f11a51976839d" exitCode=0 Feb 26 11:47:49 crc kubenswrapper[4724]: I0226 11:47:49.400470 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" event={"ID":"5f7c705e-b14f-49dc-9510-4c4b71838bbf","Type":"ContainerDied","Data":"a22924d3ac4a2f6772dedaf88ff78ab3e87633e52c5ee3c1b70f11a51976839d"} Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.795974 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827039 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-telemetry-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827091 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ovn-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827137 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-libvirt-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827201 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ssh-key-openstack-edpm-ipam\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827265 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827325 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827466 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-nova-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827777 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-bootstrap-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: 
\"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.827817 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-neutron-metadata-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.828097 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.828226 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-repo-setup-combined-ca-bundle\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.828251 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-inventory\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.828269 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfd8d\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-kube-api-access-sfd8d\") pod \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\" (UID: \"5f7c705e-b14f-49dc-9510-4c4b71838bbf\") " Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.836128 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.836763 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.837952 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.840304 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.843499 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.845768 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.845837 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.850157 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.854319 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-kube-api-access-sfd8d" (OuterVolumeSpecName: "kube-api-access-sfd8d") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "kube-api-access-sfd8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.854474 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.857556 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.862398 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.873341 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-inventory" (OuterVolumeSpecName: "inventory") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.882142 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5f7c705e-b14f-49dc-9510-4c4b71838bbf" (UID: "5f7c705e-b14f-49dc-9510-4c4b71838bbf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930714 4724 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930747 4724 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930758 4724 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930770 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930782 4724 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930793 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfd8d\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-kube-api-access-sfd8d\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930806 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930815 4724 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930826 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930834 4724 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930841 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f7c705e-b14f-49dc-9510-4c4b71838bbf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930851 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930861 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:50 crc kubenswrapper[4724]: I0226 11:47:50.930871 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5f7c705e-b14f-49dc-9510-4c4b71838bbf-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.417825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" event={"ID":"5f7c705e-b14f-49dc-9510-4c4b71838bbf","Type":"ContainerDied","Data":"c5873471599a6de48ba9f42ac112a46c203c500694fae77ef53da53af46d8ab5"} Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.417861 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5873471599a6de48ba9f42ac112a46c203c500694fae77ef53da53af46d8ab5" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.417873 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518014 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n"] Feb 26 11:47:51 crc kubenswrapper[4724]: E0226 11:47:51.518423 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7c705e-b14f-49dc-9510-4c4b71838bbf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518442 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7c705e-b14f-49dc-9510-4c4b71838bbf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:51 crc kubenswrapper[4724]: E0226 11:47:51.518465 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="extract-content" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518475 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="extract-content" Feb 26 11:47:51 crc kubenswrapper[4724]: E0226 11:47:51.518508 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="extract-utilities" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518515 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="extract-utilities" Feb 26 11:47:51 crc kubenswrapper[4724]: E0226 11:47:51.518528 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="registry-server" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518536 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="registry-server" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518757 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7822292b-3d2e-4b41-8d69-c47458de19cf" containerName="registry-server" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.518781 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7c705e-b14f-49dc-9510-4c4b71838bbf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.519522 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.522120 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.522693 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.522945 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.524494 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.526229 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.573306 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n"] Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.641689 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.641728 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.641756 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.641801 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s754\" (UniqueName: \"kubernetes.io/projected/33c4673e-f3b9-4bbf-a97d-39412344f6c8-kube-api-access-2s754\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.641835 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.744562 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.744611 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.744635 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.744662 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s754\" (UniqueName: \"kubernetes.io/projected/33c4673e-f3b9-4bbf-a97d-39412344f6c8-kube-api-access-2s754\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.744699 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.745853 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.755322 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.757610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.759944 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.760524 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s754\" (UniqueName: \"kubernetes.io/projected/33c4673e-f3b9-4bbf-a97d-39412344f6c8-kube-api-access-2s754\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qw42n\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:51 crc kubenswrapper[4724]: I0226 11:47:51.835748 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:47:52 crc kubenswrapper[4724]: I0226 11:47:52.449270 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" event={"ID":"33c4673e-f3b9-4bbf-a97d-39412344f6c8","Type":"ContainerStarted","Data":"e506e4af8868300c2b3c3032dd93b5bb3129c9e52a2b7f3a7bb9100838872b0d"} Feb 26 11:47:52 crc kubenswrapper[4724]: I0226 11:47:52.460203 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n"] Feb 26 11:47:54 crc kubenswrapper[4724]: I0226 11:47:54.466279 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" event={"ID":"33c4673e-f3b9-4bbf-a97d-39412344f6c8","Type":"ContainerStarted","Data":"d90c7efb81f77658b7ac2955e95136b6c1fd96b2def397b4dd851fc714a0e0be"} Feb 26 11:47:54 crc kubenswrapper[4724]: I0226 11:47:54.485797 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" podStartSLOduration=2.360176461 podStartE2EDuration="3.485778479s" podCreationTimestamp="2026-02-26 11:47:51 +0000 UTC" firstStartedPulling="2026-02-26 11:47:52.424097335 +0000 UTC m=+2539.079836450" lastFinishedPulling="2026-02-26 11:47:53.549699353 +0000 UTC m=+2540.205438468" observedRunningTime="2026-02-26 11:47:54.482862075 +0000 UTC m=+2541.138601210" watchObservedRunningTime="2026-02-26 11:47:54.485778479 +0000 UTC m=+2541.141517594" Feb 26 11:47:57 crc kubenswrapper[4724]: I0226 11:47:57.976137 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:47:57 crc kubenswrapper[4724]: E0226 11:47:57.976932 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.150421 4724 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535108-mhgbs"] Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.153755 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.156451 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.157318 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.158065 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.176600 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-mhgbs"] Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.207246 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qhs\" (UniqueName: \"kubernetes.io/projected/612de9a5-9cf9-412f-823d-8be9cd1ebbdf-kube-api-access-h8qhs\") pod \"auto-csr-approver-29535108-mhgbs\" (UID: \"612de9a5-9cf9-412f-823d-8be9cd1ebbdf\") " pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.308802 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qhs\" (UniqueName: \"kubernetes.io/projected/612de9a5-9cf9-412f-823d-8be9cd1ebbdf-kube-api-access-h8qhs\") pod \"auto-csr-approver-29535108-mhgbs\" (UID: \"612de9a5-9cf9-412f-823d-8be9cd1ebbdf\") " pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.327992 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qhs\" (UniqueName: \"kubernetes.io/projected/612de9a5-9cf9-412f-823d-8be9cd1ebbdf-kube-api-access-h8qhs\") pod \"auto-csr-approver-29535108-mhgbs\" (UID: \"612de9a5-9cf9-412f-823d-8be9cd1ebbdf\") " pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:00 crc kubenswrapper[4724]: I0226 11:48:00.478781 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:01 crc kubenswrapper[4724]: I0226 11:48:01.130238 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-mhgbs"] Feb 26 11:48:01 crc kubenswrapper[4724]: I0226 11:48:01.551039 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" event={"ID":"612de9a5-9cf9-412f-823d-8be9cd1ebbdf","Type":"ContainerStarted","Data":"bd9e0cae9b19e8fab9e8b90e6f126a693c9eba4282b1728ba9125b242c370a8c"} Feb 26 11:48:03 crc kubenswrapper[4724]: I0226 11:48:03.571498 4724 generic.go:334] "Generic (PLEG): container finished" podID="612de9a5-9cf9-412f-823d-8be9cd1ebbdf" containerID="477cdce3d35da0d80f5e894983da9dd0f3edbfc216a97bcb67792032f8c97dcf" exitCode=0 Feb 26 11:48:03 crc kubenswrapper[4724]: I0226 11:48:03.571975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" event={"ID":"612de9a5-9cf9-412f-823d-8be9cd1ebbdf","Type":"ContainerDied","Data":"477cdce3d35da0d80f5e894983da9dd0f3edbfc216a97bcb67792032f8c97dcf"} Feb 26 11:48:04 crc kubenswrapper[4724]: I0226 11:48:04.936225 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:05 crc kubenswrapper[4724]: I0226 11:48:05.040611 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8qhs\" (UniqueName: \"kubernetes.io/projected/612de9a5-9cf9-412f-823d-8be9cd1ebbdf-kube-api-access-h8qhs\") pod \"612de9a5-9cf9-412f-823d-8be9cd1ebbdf\" (UID: \"612de9a5-9cf9-412f-823d-8be9cd1ebbdf\") " Feb 26 11:48:05 crc kubenswrapper[4724]: I0226 11:48:05.051310 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612de9a5-9cf9-412f-823d-8be9cd1ebbdf-kube-api-access-h8qhs" (OuterVolumeSpecName: "kube-api-access-h8qhs") pod "612de9a5-9cf9-412f-823d-8be9cd1ebbdf" (UID: "612de9a5-9cf9-412f-823d-8be9cd1ebbdf"). InnerVolumeSpecName "kube-api-access-h8qhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:48:05 crc kubenswrapper[4724]: I0226 11:48:05.143019 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8qhs\" (UniqueName: \"kubernetes.io/projected/612de9a5-9cf9-412f-823d-8be9cd1ebbdf-kube-api-access-h8qhs\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:05 crc kubenswrapper[4724]: I0226 11:48:05.588662 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" event={"ID":"612de9a5-9cf9-412f-823d-8be9cd1ebbdf","Type":"ContainerDied","Data":"bd9e0cae9b19e8fab9e8b90e6f126a693c9eba4282b1728ba9125b242c370a8c"} Feb 26 11:48:05 crc kubenswrapper[4724]: I0226 11:48:05.588710 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd9e0cae9b19e8fab9e8b90e6f126a693c9eba4282b1728ba9125b242c370a8c" Feb 26 11:48:05 crc kubenswrapper[4724]: I0226 11:48:05.588749 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535108-mhgbs" Feb 26 11:48:06 crc kubenswrapper[4724]: I0226 11:48:06.009048 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-5hzg5"] Feb 26 11:48:06 crc kubenswrapper[4724]: I0226 11:48:06.017061 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535102-5hzg5"] Feb 26 11:48:07 crc kubenswrapper[4724]: I0226 11:48:07.990980 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64123437-3525-406b-b430-90dcfb4aaecb" path="/var/lib/kubelet/pods/64123437-3525-406b-b430-90dcfb4aaecb/volumes" Feb 26 11:48:08 crc kubenswrapper[4724]: I0226 11:48:08.975640 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:48:08 crc kubenswrapper[4724]: E0226 11:48:08.975961 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:48:20 crc kubenswrapper[4724]: I0226 11:48:20.976098 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:48:20 crc kubenswrapper[4724]: E0226 11:48:20.977234 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:48:31 crc kubenswrapper[4724]: I0226 11:48:31.977073 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:48:31 crc kubenswrapper[4724]: E0226 11:48:31.977834 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:48:46 crc kubenswrapper[4724]: I0226 11:48:46.975520 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:48:46 crc kubenswrapper[4724]: E0226 11:48:46.976292 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:48:52 crc kubenswrapper[4724]: I0226 11:48:52.063435 4724 generic.go:334] "Generic (PLEG): container finished" podID="33c4673e-f3b9-4bbf-a97d-39412344f6c8" 
containerID="d90c7efb81f77658b7ac2955e95136b6c1fd96b2def397b4dd851fc714a0e0be" exitCode=0 Feb 26 11:48:52 crc kubenswrapper[4724]: I0226 11:48:52.063707 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" event={"ID":"33c4673e-f3b9-4bbf-a97d-39412344f6c8","Type":"ContainerDied","Data":"d90c7efb81f77658b7ac2955e95136b6c1fd96b2def397b4dd851fc714a0e0be"} Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.427595 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.548661 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-inventory\") pod \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.548741 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ssh-key-openstack-edpm-ipam\") pod \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.548925 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s754\" (UniqueName: \"kubernetes.io/projected/33c4673e-f3b9-4bbf-a97d-39412344f6c8-kube-api-access-2s754\") pod \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.548990 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovncontroller-config-0\") pod \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.549029 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovn-combined-ca-bundle\") pod \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\" (UID: \"33c4673e-f3b9-4bbf-a97d-39412344f6c8\") " Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.556435 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c4673e-f3b9-4bbf-a97d-39412344f6c8-kube-api-access-2s754" (OuterVolumeSpecName: "kube-api-access-2s754") pod "33c4673e-f3b9-4bbf-a97d-39412344f6c8" (UID: "33c4673e-f3b9-4bbf-a97d-39412344f6c8"). InnerVolumeSpecName "kube-api-access-2s754". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.570452 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "33c4673e-f3b9-4bbf-a97d-39412344f6c8" (UID: "33c4673e-f3b9-4bbf-a97d-39412344f6c8"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.583522 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-inventory" (OuterVolumeSpecName: "inventory") pod "33c4673e-f3b9-4bbf-a97d-39412344f6c8" (UID: "33c4673e-f3b9-4bbf-a97d-39412344f6c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.585161 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "33c4673e-f3b9-4bbf-a97d-39412344f6c8" (UID: "33c4673e-f3b9-4bbf-a97d-39412344f6c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.593159 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "33c4673e-f3b9-4bbf-a97d-39412344f6c8" (UID: "33c4673e-f3b9-4bbf-a97d-39412344f6c8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.651452 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s754\" (UniqueName: \"kubernetes.io/projected/33c4673e-f3b9-4bbf-a97d-39412344f6c8-kube-api-access-2s754\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.651485 4724 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.651496 4724 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.651506 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:53 crc kubenswrapper[4724]: I0226 11:48:53.651515 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4673e-f3b9-4bbf-a97d-39412344f6c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.082679 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" event={"ID":"33c4673e-f3b9-4bbf-a97d-39412344f6c8","Type":"ContainerDied","Data":"e506e4af8868300c2b3c3032dd93b5bb3129c9e52a2b7f3a7bb9100838872b0d"} Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.082729 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qw42n" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.082736 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e506e4af8868300c2b3c3032dd93b5bb3129c9e52a2b7f3a7bb9100838872b0d" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.177575 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5"] Feb 26 11:48:54 crc kubenswrapper[4724]: E0226 11:48:54.178191 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c4673e-f3b9-4bbf-a97d-39412344f6c8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.178206 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c4673e-f3b9-4bbf-a97d-39412344f6c8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 11:48:54 crc kubenswrapper[4724]: E0226 11:48:54.178215 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612de9a5-9cf9-412f-823d-8be9cd1ebbdf" containerName="oc" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.178222 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="612de9a5-9cf9-412f-823d-8be9cd1ebbdf" containerName="oc" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.178399 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="612de9a5-9cf9-412f-823d-8be9cd1ebbdf" containerName="oc" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.178420 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c4673e-f3b9-4bbf-a97d-39412344f6c8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.179008 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.182824 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.182859 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.183061 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.183145 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.183155 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.186656 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.192003 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5"] Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.264208 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wd5k\" (UniqueName: \"kubernetes.io/projected/d044f276-fe55-46c7-ba3f-e566a7f73e5b-kube-api-access-4wd5k\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.264272 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.264298 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.264575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.264751 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.264835 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.366688 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.366790 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wd5k\" (UniqueName: \"kubernetes.io/projected/d044f276-fe55-46c7-ba3f-e566a7f73e5b-kube-api-access-4wd5k\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.366824 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.366844 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.366931 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.367033 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.372921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.378511 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.378822 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.379299 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.380468 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.384419 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wd5k\" (UniqueName: \"kubernetes.io/projected/d044f276-fe55-46c7-ba3f-e566a7f73e5b-kube-api-access-4wd5k\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:54 crc kubenswrapper[4724]: I0226 11:48:54.500130 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:48:55 crc kubenswrapper[4724]: I0226 11:48:55.077709 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5"] Feb 26 11:48:56 crc kubenswrapper[4724]: I0226 11:48:56.128557 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" event={"ID":"d044f276-fe55-46c7-ba3f-e566a7f73e5b","Type":"ContainerStarted","Data":"e22ed84e32adca803978113dd1e864944086385164f2b4be9e1e24aca5ae258d"} Feb 26 11:48:56 crc kubenswrapper[4724]: I0226 11:48:56.129130 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" event={"ID":"d044f276-fe55-46c7-ba3f-e566a7f73e5b","Type":"ContainerStarted","Data":"f3de883092c5f1f64570faf2be8de593264957a542ad7c9e82922cd286c857aa"} Feb 26 11:48:56 crc kubenswrapper[4724]: I0226 11:48:56.162677 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" podStartSLOduration=1.665722108 podStartE2EDuration="2.162655172s" podCreationTimestamp="2026-02-26 11:48:54 +0000 UTC" firstStartedPulling="2026-02-26 11:48:55.098306165 +0000 UTC m=+2601.754045280" lastFinishedPulling="2026-02-26 11:48:55.595239229 +0000 UTC m=+2602.250978344" observedRunningTime="2026-02-26 11:48:56.154069883 +0000 UTC m=+2602.809808998" watchObservedRunningTime="2026-02-26 11:48:56.162655172 +0000 UTC m=+2602.818394287" Feb 26 11:48:59 crc kubenswrapper[4724]: I0226 11:48:59.847556 4724 scope.go:117] "RemoveContainer" containerID="6bcd250dea32e3bfdcfe5956287d51c5be020c332108d1700230774b3d75897c" Feb 26 11:49:00 crc kubenswrapper[4724]: I0226 11:49:00.975937 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:49:00 crc kubenswrapper[4724]: E0226 11:49:00.976514 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.659539 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x869l"] Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.668976 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.678357 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x869l"] Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.799247 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7z8\" (UniqueName: \"kubernetes.io/projected/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-kube-api-access-9t7z8\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.799727 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-catalog-content\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.799859 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-utilities\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.902992 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7z8\" (UniqueName: \"kubernetes.io/projected/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-kube-api-access-9t7z8\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.903096 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-catalog-content\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.903191 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-utilities\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.903663 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-utilities\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.904170 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-catalog-content\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:10 crc kubenswrapper[4724]: I0226 11:49:10.945728 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9t7z8\" (UniqueName: \"kubernetes.io/projected/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-kube-api-access-9t7z8\") pod \"redhat-marketplace-x869l\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:11 crc kubenswrapper[4724]: I0226 11:49:11.006297 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:12 crc kubenswrapper[4724]: I0226 11:49:11.526363 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x869l"] Feb 26 11:49:12 crc kubenswrapper[4724]: I0226 11:49:12.280911 4724 generic.go:334] "Generic (PLEG): container finished" podID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerID="be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088" exitCode=0 Feb 26 11:49:12 crc kubenswrapper[4724]: I0226 11:49:12.281049 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerDied","Data":"be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088"} Feb 26 11:49:12 crc kubenswrapper[4724]: I0226 11:49:12.281192 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerStarted","Data":"ce772a288ea48bc9efc1e95798698c54a50473283c9e45b587e4efe2e69e5d21"} Feb 26 11:49:14 crc kubenswrapper[4724]: I0226 11:49:14.301652 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerStarted","Data":"7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158"} Feb 26 11:49:14 crc kubenswrapper[4724]: I0226 11:49:14.975379 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:49:14 crc kubenswrapper[4724]: E0226 11:49:14.975779 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:49:15 crc kubenswrapper[4724]: I0226 11:49:15.318570 4724 generic.go:334] "Generic (PLEG): container finished" podID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerID="7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158" exitCode=0 Feb 26 11:49:15 crc kubenswrapper[4724]: I0226 11:49:15.318751 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerDied","Data":"7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158"} Feb 26 11:49:16 crc kubenswrapper[4724]: I0226 11:49:16.387735 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerStarted","Data":"32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0"} Feb 26 11:49:16 crc kubenswrapper[4724]: I0226 11:49:16.417612 4724 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-x869l" podStartSLOduration=2.9773640329999997 podStartE2EDuration="6.417587608s" podCreationTimestamp="2026-02-26 11:49:10 +0000 UTC" firstStartedPulling="2026-02-26 11:49:12.282542561 +0000 UTC m=+2618.938281676" lastFinishedPulling="2026-02-26 11:49:15.722766136 +0000 UTC m=+2622.378505251" observedRunningTime="2026-02-26 11:49:16.405229802 +0000 UTC m=+2623.060968927" watchObservedRunningTime="2026-02-26 11:49:16.417587608 +0000 UTC m=+2623.073326743" Feb 26 11:49:21 crc kubenswrapper[4724]: I0226 11:49:21.007192 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:21 crc kubenswrapper[4724]: I0226 11:49:21.007738 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:21 crc kubenswrapper[4724]: I0226 11:49:21.063355 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:21 crc kubenswrapper[4724]: I0226 11:49:21.566642 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:21 crc kubenswrapper[4724]: I0226 11:49:21.674155 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x869l"] Feb 26 11:49:23 crc kubenswrapper[4724]: I0226 11:49:23.448897 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x869l" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="registry-server" containerID="cri-o://32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0" gracePeriod=2 Feb 26 11:49:23 crc kubenswrapper[4724]: I0226 11:49:23.926011 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.010191 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-catalog-content\") pod \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.010359 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7z8\" (UniqueName: \"kubernetes.io/projected/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-kube-api-access-9t7z8\") pod \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.010575 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-utilities\") pod \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\" (UID: \"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c\") " Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.014073 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-utilities" (OuterVolumeSpecName: "utilities") pod "0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" (UID: "0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.024339 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-kube-api-access-9t7z8" (OuterVolumeSpecName: "kube-api-access-9t7z8") pod "0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" (UID: "0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c"). InnerVolumeSpecName "kube-api-access-9t7z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.041074 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" (UID: "0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.113888 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7z8\" (UniqueName: \"kubernetes.io/projected/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-kube-api-access-9t7z8\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.113938 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.113953 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.460298 4724 generic.go:334] "Generic (PLEG): container finished" podID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerID="32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0" exitCode=0 Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.460394 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x869l" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.460399 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerDied","Data":"32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0"} Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.460795 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x869l" event={"ID":"0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c","Type":"ContainerDied","Data":"ce772a288ea48bc9efc1e95798698c54a50473283c9e45b587e4efe2e69e5d21"} Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.460828 4724 scope.go:117] "RemoveContainer" containerID="32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.485243 4724 scope.go:117] "RemoveContainer" containerID="7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.505223 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x869l"] Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.512318 4724 scope.go:117] "RemoveContainer" containerID="be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.518677 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x869l"] Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.561086 4724 scope.go:117] "RemoveContainer" containerID="32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0" Feb 26 11:49:24 crc kubenswrapper[4724]: E0226 11:49:24.561526 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0\": container with ID starting with 32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0 not found: ID does not exist" containerID="32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.561560 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0"} err="failed to get container status \"32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0\": rpc error: code = NotFound desc = could not find container \"32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0\": container with ID starting with 32a1cee88bab7876bfc9a76bbd2e3a828c5eb15b14568e281242665cb20f4ec0 not found: ID does not exist" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.561583 4724 scope.go:117] "RemoveContainer" containerID="7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158" Feb 26 11:49:24 crc kubenswrapper[4724]: E0226 11:49:24.562067 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158\": container with ID starting with 7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158 not found: ID does not exist" containerID="7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.562124 4724 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158"} err="failed to get container status \"7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158\": rpc error: code = NotFound desc = could not find container \"7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158\": container with ID starting with 7374c60d072e6107aa40fa2870cab45a15a913393527dcb11d52facbc35ee158 not found: ID does not exist" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.562153 4724 scope.go:117] "RemoveContainer" containerID="be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088" Feb 26 11:49:24 crc kubenswrapper[4724]: E0226 11:49:24.562491 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088\": container with ID starting with be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088 not found: ID does not exist" containerID="be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088" Feb 26 11:49:24 crc kubenswrapper[4724]: I0226 11:49:24.562517 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088"} err="failed to get container status \"be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088\": rpc error: code = NotFound desc = could not find container \"be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088\": container with ID starting with be6def91a19cb4f37f507491967b2ca2f8e7c7e6aa63b4974f0e9d3b738aa088 not found: ID does not exist" Feb 26 11:49:25 crc kubenswrapper[4724]: I0226 11:49:25.985626 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" path="/var/lib/kubelet/pods/0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c/volumes" Feb 26 11:49:28 crc kubenswrapper[4724]: I0226 11:49:28.976602 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:49:28 crc kubenswrapper[4724]: E0226 11:49:28.977121 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:49:39 crc kubenswrapper[4724]: I0226 11:49:39.976196 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:49:39 crc kubenswrapper[4724]: E0226 11:49:39.976912 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:49:40 crc kubenswrapper[4724]: I0226 11:49:40.604585 4724 generic.go:334] "Generic (PLEG): container finished" podID="d044f276-fe55-46c7-ba3f-e566a7f73e5b" 
containerID="e22ed84e32adca803978113dd1e864944086385164f2b4be9e1e24aca5ae258d" exitCode=0 Feb 26 11:49:40 crc kubenswrapper[4724]: I0226 11:49:40.604669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" event={"ID":"d044f276-fe55-46c7-ba3f-e566a7f73e5b","Type":"ContainerDied","Data":"e22ed84e32adca803978113dd1e864944086385164f2b4be9e1e24aca5ae258d"} Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.028600 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.191115 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-inventory\") pod \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.191158 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.191209 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wd5k\" (UniqueName: \"kubernetes.io/projected/d044f276-fe55-46c7-ba3f-e566a7f73e5b-kube-api-access-4wd5k\") pod \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.191273 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-metadata-combined-ca-bundle\") pod \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.192036 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-ssh-key-openstack-edpm-ipam\") pod \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.192154 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-nova-metadata-neutron-config-0\") pod \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\" (UID: \"d044f276-fe55-46c7-ba3f-e566a7f73e5b\") " Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.196886 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d044f276-fe55-46c7-ba3f-e566a7f73e5b" (UID: "d044f276-fe55-46c7-ba3f-e566a7f73e5b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.199466 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d044f276-fe55-46c7-ba3f-e566a7f73e5b-kube-api-access-4wd5k" (OuterVolumeSpecName: "kube-api-access-4wd5k") pod "d044f276-fe55-46c7-ba3f-e566a7f73e5b" (UID: "d044f276-fe55-46c7-ba3f-e566a7f73e5b"). InnerVolumeSpecName "kube-api-access-4wd5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.228562 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d044f276-fe55-46c7-ba3f-e566a7f73e5b" (UID: "d044f276-fe55-46c7-ba3f-e566a7f73e5b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.228672 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d044f276-fe55-46c7-ba3f-e566a7f73e5b" (UID: "d044f276-fe55-46c7-ba3f-e566a7f73e5b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.228809 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-inventory" (OuterVolumeSpecName: "inventory") pod "d044f276-fe55-46c7-ba3f-e566a7f73e5b" (UID: "d044f276-fe55-46c7-ba3f-e566a7f73e5b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.229727 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d044f276-fe55-46c7-ba3f-e566a7f73e5b" (UID: "d044f276-fe55-46c7-ba3f-e566a7f73e5b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.295610 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.295657 4724 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.295676 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wd5k\" (UniqueName: \"kubernetes.io/projected/d044f276-fe55-46c7-ba3f-e566a7f73e5b-kube-api-access-4wd5k\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.295695 4724 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.295710 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.295721 4724 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d044f276-fe55-46c7-ba3f-e566a7f73e5b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.625843 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" event={"ID":"d044f276-fe55-46c7-ba3f-e566a7f73e5b","Type":"ContainerDied","Data":"f3de883092c5f1f64570faf2be8de593264957a542ad7c9e82922cd286c857aa"} Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.625882 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.625889 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3de883092c5f1f64570faf2be8de593264957a542ad7c9e82922cd286c857aa" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.769810 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97"] Feb 26 11:49:42 crc kubenswrapper[4724]: E0226 11:49:42.771813 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="extract-utilities" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.771845 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="extract-utilities" Feb 26 11:49:42 crc kubenswrapper[4724]: E0226 11:49:42.771871 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d044f276-fe55-46c7-ba3f-e566a7f73e5b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.771882 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d044f276-fe55-46c7-ba3f-e566a7f73e5b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 11:49:42 crc kubenswrapper[4724]: E0226 11:49:42.771911 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="extract-content" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.771920 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="extract-content" Feb 26 11:49:42 crc kubenswrapper[4724]: E0226 11:49:42.771948 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="registry-server" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.771955 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="registry-server" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.772266 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e6c8b07-d02d-40f3-a1dc-ec729b9b0f3c" containerName="registry-server" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.772301 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d044f276-fe55-46c7-ba3f-e566a7f73e5b" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.773125 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.780348 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.780484 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.780563 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.780724 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.782002 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.783979 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97"] Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.803906 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.803969 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.804018 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hgc\" (UniqueName: \"kubernetes.io/projected/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-kube-api-access-f5hgc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.804089 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.804153 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.905399 4724 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f5hgc\" (UniqueName: \"kubernetes.io/projected/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-kube-api-access-f5hgc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.905492 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.905529 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.905577 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.905616 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.911942 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.912207 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.915720 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.924815 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:42 crc kubenswrapper[4724]: I0226 11:49:42.926040 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hgc\" (UniqueName: \"kubernetes.io/projected/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-kube-api-access-f5hgc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwr97\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:43 crc kubenswrapper[4724]: I0226 11:49:43.094144 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" Feb 26 11:49:43 crc kubenswrapper[4724]: I0226 11:49:43.658274 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97"] Feb 26 11:49:44 crc kubenswrapper[4724]: I0226 11:49:44.648535 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" event={"ID":"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e","Type":"ContainerStarted","Data":"a640a6bf4553f57f1c5982122b6971027cf3eed453d7bb6ea392379d8658bca8"} Feb 26 11:49:44 crc kubenswrapper[4724]: I0226 11:49:44.648899 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" event={"ID":"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e","Type":"ContainerStarted","Data":"c8af012504aa1efab4f13739cd18ce60ec28cdb9be958778449d79de1f674163"} Feb 26 11:49:44 crc kubenswrapper[4724]: I0226 11:49:44.672367 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" podStartSLOduration=2.013594234 podStartE2EDuration="2.672346156s" podCreationTimestamp="2026-02-26 11:49:42 +0000 UTC" firstStartedPulling="2026-02-26 11:49:43.6629089 +0000 UTC m=+2650.318648015" lastFinishedPulling="2026-02-26 11:49:44.321660822 +0000 UTC m=+2650.977399937" observedRunningTime="2026-02-26 11:49:44.664219359 +0000 UTC m=+2651.319958484" watchObservedRunningTime="2026-02-26 11:49:44.672346156 +0000 UTC m=+2651.328085271" Feb 26 11:49:51 crc kubenswrapper[4724]: I0226 11:49:51.975597 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:49:51 crc kubenswrapper[4724]: E0226 11:49:51.976265 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.133976 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535110-jx99k"] Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.136918 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.139232 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.141692 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.142781 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.150719 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-jx99k"] Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.307214 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jlh\" (UniqueName: \"kubernetes.io/projected/c7e81268-2152-485e-ab23-331c1e0e738e-kube-api-access-n4jlh\") pod \"auto-csr-approver-29535110-jx99k\" (UID: \"c7e81268-2152-485e-ab23-331c1e0e738e\") " pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.409356 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4jlh\" (UniqueName: \"kubernetes.io/projected/c7e81268-2152-485e-ab23-331c1e0e738e-kube-api-access-n4jlh\") pod \"auto-csr-approver-29535110-jx99k\" (UID: \"c7e81268-2152-485e-ab23-331c1e0e738e\") " pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.432078 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4jlh\" (UniqueName: \"kubernetes.io/projected/c7e81268-2152-485e-ab23-331c1e0e738e-kube-api-access-n4jlh\") pod \"auto-csr-approver-29535110-jx99k\" (UID: \"c7e81268-2152-485e-ab23-331c1e0e738e\") " pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.463357 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:00 crc kubenswrapper[4724]: I0226 11:50:00.947220 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-jx99k"] Feb 26 11:50:01 crc kubenswrapper[4724]: I0226 11:50:01.231540 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-jx99k" event={"ID":"c7e81268-2152-485e-ab23-331c1e0e738e","Type":"ContainerStarted","Data":"fac07ab2184fb6fca18dafe81b347ebee6a2bce1f803109b8c5456b2dd2465d5"} Feb 26 11:50:02 crc kubenswrapper[4724]: I0226 11:50:02.974850 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:50:02 crc kubenswrapper[4724]: E0226 11:50:02.975459 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:50:04 crc kubenswrapper[4724]: I0226 11:50:04.260664 4724 generic.go:334] "Generic (PLEG): container finished" podID="c7e81268-2152-485e-ab23-331c1e0e738e" containerID="c001fb455ec7730566a14e4ab9b7d520719db426453d6fbc9e881ff5769b128c" exitCode=0 Feb 26 11:50:04 crc kubenswrapper[4724]: I0226 11:50:04.260747 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-jx99k" event={"ID":"c7e81268-2152-485e-ab23-331c1e0e738e","Type":"ContainerDied","Data":"c001fb455ec7730566a14e4ab9b7d520719db426453d6fbc9e881ff5769b128c"} Feb 26 11:50:05 crc kubenswrapper[4724]: I0226 11:50:05.638767 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:05 crc kubenswrapper[4724]: I0226 11:50:05.822872 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4jlh\" (UniqueName: \"kubernetes.io/projected/c7e81268-2152-485e-ab23-331c1e0e738e-kube-api-access-n4jlh\") pod \"c7e81268-2152-485e-ab23-331c1e0e738e\" (UID: \"c7e81268-2152-485e-ab23-331c1e0e738e\") " Feb 26 11:50:05 crc kubenswrapper[4724]: I0226 11:50:05.828948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e81268-2152-485e-ab23-331c1e0e738e-kube-api-access-n4jlh" (OuterVolumeSpecName: "kube-api-access-n4jlh") pod "c7e81268-2152-485e-ab23-331c1e0e738e" (UID: "c7e81268-2152-485e-ab23-331c1e0e738e"). InnerVolumeSpecName "kube-api-access-n4jlh". 
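The reconciler_common.go / operation_generator.go pairs above trace the kubelet volume manager's standard lifecycle for a projected service-account token volume: VerifyControllerAttachedVolume admits the volume into the actual state of world, MountVolume plus SetUp materializes it for the pod, and once the pod finishes, UnmountVolume plus TearDown removes it until a final "Volume detached" entry appears. A minimal Go sketch of that desired-versus-actual reconcile pattern (an illustrative toy, not the kubelet's actual types or logic):

    package main

    import "fmt"

    // reconcile moves actual state toward desired state, mirroring the
    // "MountVolume started" / "UnmountVolume started" pairs in the log.
    // Illustrative toy only, not kubelet's reconciler.
    func reconcile(desired, actual map[string]bool) {
        for v := range desired {
            if !actual[v] {
                fmt.Printf("MountVolume started for volume %q\n", v)
                actual[v] = true // stands in for MountVolume.SetUp succeeding
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Printf("UnmountVolume started for volume %q\n", v)
                delete(actual, v) // stands in for UnmountVolume.TearDown succeeding
            }
        }
    }

    func main() {
        actual := map[string]bool{}
        // Pod created: its kube-api-access volume becomes desired and is mounted.
        reconcile(map[string]bool{"kube-api-access-n4jlh": true}, actual)
        // Pod deleted: nothing is desired any more, so the volume is unmounted.
        reconcile(map[string]bool{}, actual)
    }

Every "started"/"succeeded" pair in the log corresponds to one such move of actual state toward desired state.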
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:50:05 crc kubenswrapper[4724]: I0226 11:50:05.925087 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4jlh\" (UniqueName: \"kubernetes.io/projected/c7e81268-2152-485e-ab23-331c1e0e738e-kube-api-access-n4jlh\") on node \"crc\" DevicePath \"\"" Feb 26 11:50:06 crc kubenswrapper[4724]: I0226 11:50:06.279937 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535110-jx99k" event={"ID":"c7e81268-2152-485e-ab23-331c1e0e738e","Type":"ContainerDied","Data":"fac07ab2184fb6fca18dafe81b347ebee6a2bce1f803109b8c5456b2dd2465d5"} Feb 26 11:50:06 crc kubenswrapper[4724]: I0226 11:50:06.279986 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fac07ab2184fb6fca18dafe81b347ebee6a2bce1f803109b8c5456b2dd2465d5" Feb 26 11:50:06 crc kubenswrapper[4724]: I0226 11:50:06.280031 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535110-jx99k" Feb 26 11:50:06 crc kubenswrapper[4724]: I0226 11:50:06.725644 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-nhbvs"] Feb 26 11:50:06 crc kubenswrapper[4724]: I0226 11:50:06.734618 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535104-nhbvs"] Feb 26 11:50:07 crc kubenswrapper[4724]: I0226 11:50:07.987548 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b153bf-7f6f-4454-bf64-bba111ce8391" path="/var/lib/kubelet/pods/12b153bf-7f6f-4454-bf64-bba111ce8391/volumes" Feb 26 11:50:13 crc kubenswrapper[4724]: I0226 11:50:13.983300 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:50:13 crc kubenswrapper[4724]: E0226 11:50:13.984161 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.895820 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5nts"] Feb 26 11:50:27 crc kubenswrapper[4724]: E0226 11:50:27.896763 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e81268-2152-485e-ab23-331c1e0e738e" containerName="oc" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.896779 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e81268-2152-485e-ab23-331c1e0e738e" containerName="oc" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.896951 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e81268-2152-485e-ab23-331c1e0e738e" containerName="oc" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.898548 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.922746 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5nts"] Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.977582 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-catalog-content\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.977640 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cl66\" (UniqueName: \"kubernetes.io/projected/a8099f6d-1f0f-433f-a964-eaa13db2daef-kube-api-access-5cl66\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:27 crc kubenswrapper[4724]: I0226 11:50:27.978075 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-utilities\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.079977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-utilities\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.080334 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-catalog-content\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.080423 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cl66\" (UniqueName: \"kubernetes.io/projected/a8099f6d-1f0f-433f-a964-eaa13db2daef-kube-api-access-5cl66\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.080500 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-utilities\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.080799 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-catalog-content\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.101295 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5cl66\" (UniqueName: \"kubernetes.io/projected/a8099f6d-1f0f-433f-a964-eaa13db2daef-kube-api-access-5cl66\") pod \"community-operators-n5nts\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.217537 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.633880 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5nts"] Feb 26 11:50:28 crc kubenswrapper[4724]: I0226 11:50:28.976145 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:50:28 crc kubenswrapper[4724]: E0226 11:50:28.976494 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:50:29 crc kubenswrapper[4724]: I0226 11:50:29.494165 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerID="b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658" exitCode=0 Feb 26 11:50:29 crc kubenswrapper[4724]: I0226 11:50:29.494302 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerDied","Data":"b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658"} Feb 26 11:50:29 crc kubenswrapper[4724]: I0226 11:50:29.494587 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerStarted","Data":"97d8011405384a0cde378bd37ff21db735a770f5c63cb1e582a100b15db94761"} Feb 26 11:50:30 crc kubenswrapper[4724]: I0226 11:50:30.505150 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerStarted","Data":"cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d"} Feb 26 11:50:32 crc kubenswrapper[4724]: I0226 11:50:32.525770 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerID="cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d" exitCode=0 Feb 26 11:50:32 crc kubenswrapper[4724]: I0226 11:50:32.525843 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerDied","Data":"cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d"} Feb 26 11:50:33 crc kubenswrapper[4724]: I0226 11:50:33.542705 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerStarted","Data":"affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c"} Feb 26 11:50:33 crc kubenswrapper[4724]: I0226 11:50:33.570165 4724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5nts" podStartSLOduration=3.07565678 podStartE2EDuration="6.57014431s" podCreationTimestamp="2026-02-26 11:50:27 +0000 UTC" firstStartedPulling="2026-02-26 11:50:29.496501859 +0000 UTC m=+2696.152240984" lastFinishedPulling="2026-02-26 11:50:32.990989399 +0000 UTC m=+2699.646728514" observedRunningTime="2026-02-26 11:50:33.55994154 +0000 UTC m=+2700.215680655" watchObservedRunningTime="2026-02-26 11:50:33.57014431 +0000 UTC m=+2700.225883425" Feb 26 11:50:38 crc kubenswrapper[4724]: I0226 11:50:38.218459 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:38 crc kubenswrapper[4724]: I0226 11:50:38.219057 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:38 crc kubenswrapper[4724]: I0226 11:50:38.271113 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:38 crc kubenswrapper[4724]: I0226 11:50:38.638334 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:38 crc kubenswrapper[4724]: I0226 11:50:38.695675 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5nts"] Feb 26 11:50:40 crc kubenswrapper[4724]: I0226 11:50:40.602835 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5nts" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="registry-server" containerID="cri-o://affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c" gracePeriod=2 Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.079592 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.261871 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-utilities\") pod \"a8099f6d-1f0f-433f-a964-eaa13db2daef\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.262503 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cl66\" (UniqueName: \"kubernetes.io/projected/a8099f6d-1f0f-433f-a964-eaa13db2daef-kube-api-access-5cl66\") pod \"a8099f6d-1f0f-433f-a964-eaa13db2daef\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.262604 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-catalog-content\") pod \"a8099f6d-1f0f-433f-a964-eaa13db2daef\" (UID: \"a8099f6d-1f0f-433f-a964-eaa13db2daef\") " Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.263237 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-utilities" (OuterVolumeSpecName: "utilities") pod "a8099f6d-1f0f-433f-a964-eaa13db2daef" (UID: "a8099f6d-1f0f-433f-a964-eaa13db2daef"). InnerVolumeSpecName "utilities". 
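The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling), as the numbers show up to float rounding:

    podStartE2EDuration = 11:50:33.57014431 - 11:50:27            = 6.57014431s
    pull window         = 11:50:32.990989399 - 11:50:29.496501859 ≈ 3.49448754s
    podStartSLOduration ≈ 6.57014431 - 3.49448754                 ≈ 3.0756568s

The same arithmetic reproduces the redhat-operators-h2z6p tracker entry later in this log: 15.118724059s end to end, minus a pull window of about 11.714458s, gives roughly 3.404266s.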
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.275982 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8099f6d-1f0f-433f-a964-eaa13db2daef-kube-api-access-5cl66" (OuterVolumeSpecName: "kube-api-access-5cl66") pod "a8099f6d-1f0f-433f-a964-eaa13db2daef" (UID: "a8099f6d-1f0f-433f-a964-eaa13db2daef"). InnerVolumeSpecName "kube-api-access-5cl66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.319851 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8099f6d-1f0f-433f-a964-eaa13db2daef" (UID: "a8099f6d-1f0f-433f-a964-eaa13db2daef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.364623 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cl66\" (UniqueName: \"kubernetes.io/projected/a8099f6d-1f0f-433f-a964-eaa13db2daef-kube-api-access-5cl66\") on node \"crc\" DevicePath \"\"" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.364669 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.364683 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8099f6d-1f0f-433f-a964-eaa13db2daef-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.616573 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerID="affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c" exitCode=0 Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.616643 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5nts" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.616658 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerDied","Data":"affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c"} Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.617749 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5nts" event={"ID":"a8099f6d-1f0f-433f-a964-eaa13db2daef","Type":"ContainerDied","Data":"97d8011405384a0cde378bd37ff21db735a770f5c63cb1e582a100b15db94761"} Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.617774 4724 scope.go:117] "RemoveContainer" containerID="affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.657168 4724 scope.go:117] "RemoveContainer" containerID="cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.660461 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5nts"] Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.669934 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5nts"] Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.693012 4724 scope.go:117] "RemoveContainer" containerID="b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.722970 4724 scope.go:117] "RemoveContainer" containerID="affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c" Feb 26 11:50:41 crc kubenswrapper[4724]: E0226 11:50:41.723691 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c\": container with ID starting with affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c not found: ID does not exist" containerID="affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.723801 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c"} err="failed to get container status \"affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c\": rpc error: code = NotFound desc = could not find container \"affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c\": container with ID starting with affe6ba3c9794cd125518742d63793f79a304df3c149c6931bd16c307ef6135c not found: ID does not exist" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.723896 4724 scope.go:117] "RemoveContainer" containerID="cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d" Feb 26 11:50:41 crc kubenswrapper[4724]: E0226 11:50:41.724222 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d\": container with ID starting with cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d not found: ID does not exist" containerID="cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.724320 4724 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d"} err="failed to get container status \"cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d\": rpc error: code = NotFound desc = could not find container \"cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d\": container with ID starting with cd5e04079b1654557225f939e8dd4927da53f12532ece3a30c692d81d79ee17d not found: ID does not exist" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.724409 4724 scope.go:117] "RemoveContainer" containerID="b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658" Feb 26 11:50:41 crc kubenswrapper[4724]: E0226 11:50:41.724702 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658\": container with ID starting with b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658 not found: ID does not exist" containerID="b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.724805 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658"} err="failed to get container status \"b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658\": rpc error: code = NotFound desc = could not find container \"b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658\": container with ID starting with b284dabed9ee4e2ec0feecd3eb34d2a4329d520035fd6f26ba305d714a87a658 not found: ID does not exist" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.976655 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:50:41 crc kubenswrapper[4724]: E0226 11:50:41.976878 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:50:41 crc kubenswrapper[4724]: I0226 11:50:41.989585 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" path="/var/lib/kubelet/pods/a8099f6d-1f0f-433f-a964-eaa13db2daef/volumes" Feb 26 11:50:52 crc kubenswrapper[4724]: I0226 11:50:52.975523 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:50:52 crc kubenswrapper[4724]: E0226 11:50:52.976457 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:50:59 crc kubenswrapper[4724]: I0226 11:50:59.950095 4724 scope.go:117] "RemoveContainer" containerID="7bfaece8e0c9084d55dc97f998815ba5c2cfe537e8859c4ab4a011b4beee29b9" Feb 26 11:51:07 crc 
kubenswrapper[4724]: I0226 11:51:07.975986 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:51:07 crc kubenswrapper[4724]: E0226 11:51:07.976881 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.395766 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h2z6p"] Feb 26 11:51:13 crc kubenswrapper[4724]: E0226 11:51:13.397772 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="extract-utilities" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.397875 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="extract-utilities" Feb 26 11:51:13 crc kubenswrapper[4724]: E0226 11:51:13.397946 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="registry-server" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.397997 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="registry-server" Feb 26 11:51:13 crc kubenswrapper[4724]: E0226 11:51:13.398072 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="extract-content" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.398124 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="extract-content" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.398509 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8099f6d-1f0f-433f-a964-eaa13db2daef" containerName="registry-server" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.399996 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.403524 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2z6p"] Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.493654 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-kube-api-access-bzhrx\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.493799 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-catalog-content\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.493882 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-utilities\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.595102 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-utilities\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.595454 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-kube-api-access-bzhrx\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.595648 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-utilities\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.596047 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-catalog-content\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.596162 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-catalog-content\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.623153 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-kube-api-access-bzhrx\") pod \"redhat-operators-h2z6p\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:13 crc kubenswrapper[4724]: I0226 11:51:13.724503 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:14 crc kubenswrapper[4724]: I0226 11:51:14.576497 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2z6p"] Feb 26 11:51:14 crc kubenswrapper[4724]: I0226 11:51:14.927994 4724 generic.go:334] "Generic (PLEG): container finished" podID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerID="3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5" exitCode=0 Feb 26 11:51:14 crc kubenswrapper[4724]: I0226 11:51:14.928054 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerDied","Data":"3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5"} Feb 26 11:51:14 crc kubenswrapper[4724]: I0226 11:51:14.928123 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerStarted","Data":"dc0cf7b2a02a4fa9043c5a563a07fea2a8553b0e957e36093a97b1009c87be00"} Feb 26 11:51:15 crc kubenswrapper[4724]: I0226 11:51:15.940363 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerStarted","Data":"a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88"} Feb 26 11:51:20 crc kubenswrapper[4724]: I0226 11:51:20.976822 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:51:20 crc kubenswrapper[4724]: E0226 11:51:20.977597 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:51:25 crc kubenswrapper[4724]: I0226 11:51:25.058991 4724 generic.go:334] "Generic (PLEG): container finished" podID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerID="a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88" exitCode=0 Feb 26 11:51:25 crc kubenswrapper[4724]: I0226 11:51:25.059528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerDied","Data":"a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88"} Feb 26 11:51:28 crc kubenswrapper[4724]: I0226 11:51:28.095617 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerStarted","Data":"2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b"} Feb 26 11:51:28 crc kubenswrapper[4724]: I0226 11:51:28.118744 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-h2z6p" podStartSLOduration=3.404266073 podStartE2EDuration="15.118724059s" podCreationTimestamp="2026-02-26 11:51:13 +0000 UTC" firstStartedPulling="2026-02-26 11:51:14.929708605 +0000 UTC m=+2741.585447720" lastFinishedPulling="2026-02-26 11:51:26.644166571 +0000 UTC m=+2753.299905706" observedRunningTime="2026-02-26 11:51:28.111374832 +0000 UTC m=+2754.767113957" watchObservedRunningTime="2026-02-26 11:51:28.118724059 +0000 UTC m=+2754.774463174" Feb 26 11:51:33 crc kubenswrapper[4724]: I0226 11:51:33.724845 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:33 crc kubenswrapper[4724]: I0226 11:51:33.725404 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:51:34 crc kubenswrapper[4724]: I0226 11:51:34.778757 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h2z6p" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server" probeResult="failure" output=< Feb 26 11:51:34 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:51:34 crc kubenswrapper[4724]: > Feb 26 11:51:34 crc kubenswrapper[4724]: I0226 11:51:34.975756 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:51:34 crc kubenswrapper[4724]: E0226 11:51:34.976334 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:51:44 crc kubenswrapper[4724]: I0226 11:51:44.766977 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h2z6p" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server" probeResult="failure" output=< Feb 26 11:51:44 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:51:44 crc kubenswrapper[4724]: > Feb 26 11:51:46 crc kubenswrapper[4724]: I0226 11:51:46.975537 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:51:46 crc kubenswrapper[4724]: E0226 11:51:46.976056 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:51:54 crc kubenswrapper[4724]: I0226 11:51:54.769773 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h2z6p" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server" probeResult="failure" output=< Feb 26 11:51:54 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:51:54 crc kubenswrapper[4724]: > Feb 26 11:51:57 crc kubenswrapper[4724]: I0226 11:51:57.976022 4724 scope.go:117] 
"RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:51:57 crc kubenswrapper[4724]: E0226 11:51:57.977712 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.150222 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535112-tpht5"] Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.152147 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.155158 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.159227 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.159404 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.166404 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-tpht5"] Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.184457 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57mp\" (UniqueName: \"kubernetes.io/projected/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce-kube-api-access-k57mp\") pod \"auto-csr-approver-29535112-tpht5\" (UID: \"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce\") " pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.289309 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k57mp\" (UniqueName: \"kubernetes.io/projected/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce-kube-api-access-k57mp\") pod \"auto-csr-approver-29535112-tpht5\" (UID: \"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce\") " pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.315844 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k57mp\" (UniqueName: \"kubernetes.io/projected/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce-kube-api-access-k57mp\") pod \"auto-csr-approver-29535112-tpht5\" (UID: \"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce\") " pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.481872 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.935453 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-tpht5"] Feb 26 11:52:00 crc kubenswrapper[4724]: I0226 11:52:00.952059 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:52:01 crc kubenswrapper[4724]: I0226 11:52:01.421885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535112-tpht5" event={"ID":"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce","Type":"ContainerStarted","Data":"fe7c78653fb677adbb4c886e4539a1551b0356b4531c9c3c9bd1b1dbff6fa017"} Feb 26 11:52:02 crc kubenswrapper[4724]: I0226 11:52:02.432342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535112-tpht5" event={"ID":"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce","Type":"ContainerStarted","Data":"6ebec88c02f795ba9d01e78a1fe4d811c88e4b801e90e505ecab32e1e6f4258b"} Feb 26 11:52:02 crc kubenswrapper[4724]: I0226 11:52:02.449101 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535112-tpht5" podStartSLOduration=1.472227595 podStartE2EDuration="2.44908084s" podCreationTimestamp="2026-02-26 11:52:00 +0000 UTC" firstStartedPulling="2026-02-26 11:52:00.95175684 +0000 UTC m=+2787.607495955" lastFinishedPulling="2026-02-26 11:52:01.928610085 +0000 UTC m=+2788.584349200" observedRunningTime="2026-02-26 11:52:02.444400851 +0000 UTC m=+2789.100139956" watchObservedRunningTime="2026-02-26 11:52:02.44908084 +0000 UTC m=+2789.104819955" Feb 26 11:52:03 crc kubenswrapper[4724]: I0226 11:52:03.446038 4724 generic.go:334] "Generic (PLEG): container finished" podID="a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce" containerID="6ebec88c02f795ba9d01e78a1fe4d811c88e4b801e90e505ecab32e1e6f4258b" exitCode=0 Feb 26 11:52:03 crc kubenswrapper[4724]: I0226 11:52:03.446119 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535112-tpht5" event={"ID":"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce","Type":"ContainerDied","Data":"6ebec88c02f795ba9d01e78a1fe4d811c88e4b801e90e505ecab32e1e6f4258b"} Feb 26 11:52:03 crc kubenswrapper[4724]: I0226 11:52:03.776060 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:52:03 crc kubenswrapper[4724]: I0226 11:52:03.827101 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:52:04 crc kubenswrapper[4724]: I0226 11:52:04.017152 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h2z6p"] Feb 26 11:52:04 crc kubenswrapper[4724]: I0226 11:52:04.836365 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:04 crc kubenswrapper[4724]: I0226 11:52:04.981517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k57mp\" (UniqueName: \"kubernetes.io/projected/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce-kube-api-access-k57mp\") pod \"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce\" (UID: \"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce\") " Feb 26 11:52:04 crc kubenswrapper[4724]: I0226 11:52:04.990981 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce-kube-api-access-k57mp" (OuterVolumeSpecName: "kube-api-access-k57mp") pod "a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce" (UID: "a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce"). InnerVolumeSpecName "kube-api-access-k57mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.085109 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k57mp\" (UniqueName: \"kubernetes.io/projected/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce-kube-api-access-k57mp\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.462859 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535112-tpht5" event={"ID":"a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce","Type":"ContainerDied","Data":"fe7c78653fb677adbb4c886e4539a1551b0356b4531c9c3c9bd1b1dbff6fa017"} Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.462897 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535112-tpht5" Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.462901 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7c78653fb677adbb4c886e4539a1551b0356b4531c9c3c9bd1b1dbff6fa017" Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.463259 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h2z6p" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server" containerID="cri-o://2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b" gracePeriod=2 Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.527010 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-gjk6f"] Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.535619 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535106-gjk6f"] Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.893929 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:52:05 crc kubenswrapper[4724]: I0226 11:52:05.984591 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd8514e-7ae7-4dff-a626-86bc0c716293" path="/var/lib/kubelet/pods/ffd8514e-7ae7-4dff-a626-86bc0c716293/volumes" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.035766 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-utilities\") pod \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.036205 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-catalog-content\") pod \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.036331 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-kube-api-access-bzhrx\") pod \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\" (UID: \"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4\") " Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.037044 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-utilities" (OuterVolumeSpecName: "utilities") pod "bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" (UID: "bc09ac3e-6799-4f24-9f0a-8bc8d930bab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.046861 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-kube-api-access-bzhrx" (OuterVolumeSpecName: "kube-api-access-bzhrx") pod "bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" (UID: "bc09ac3e-6799-4f24-9f0a-8bc8d930bab4"). InnerVolumeSpecName "kube-api-access-bzhrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.139442 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.140585 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzhrx\" (UniqueName: \"kubernetes.io/projected/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-kube-api-access-bzhrx\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.162395 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" (UID: "bc09ac3e-6799-4f24-9f0a-8bc8d930bab4"). InnerVolumeSpecName "catalog-content". 
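After the startup probe's repeated ":50051" connect timeouts cleared (probe="startup" status="started", then readiness "ready"), redhat-operators-h2z6p was deleted and is going through the same graceful stop seen earlier for community-operators-n5nts: "Killing container with a grace period" (gracePeriod=2), SIGTERM first, SIGKILL only if the container outlives the grace window, then ContainerDied and volume teardown. A self-contained Go sketch of that generic stop pattern (an assumed simplification for Unix, not CRI-O's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace for the process to exit,
    // then falls back to SIGKILL, the generic shape behind gracePeriod=2.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM) // polite request first
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            cmd.Process.Kill() // escalate after the grace period
            <-done
            fmt.Println("killed after grace period expired")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "30")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithGrace(cmd, 2*time.Second)
    }

With gracePeriod=2, a registry-server that handles SIGTERM promptly exits on the first branch; the ContainerDied event landing about a second after the kill entry here indicates exactly that.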
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.243042 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.474054 4724 generic.go:334] "Generic (PLEG): container finished" podID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerID="2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b" exitCode=0 Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.474108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerDied","Data":"2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b"} Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.474143 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2z6p" event={"ID":"bc09ac3e-6799-4f24-9f0a-8bc8d930bab4","Type":"ContainerDied","Data":"dc0cf7b2a02a4fa9043c5a563a07fea2a8553b0e957e36093a97b1009c87be00"} Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.474166 4724 scope.go:117] "RemoveContainer" containerID="2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.474384 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h2z6p" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.516078 4724 scope.go:117] "RemoveContainer" containerID="a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.527579 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h2z6p"] Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.544854 4724 scope.go:117] "RemoveContainer" containerID="3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.546783 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h2z6p"] Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.598957 4724 scope.go:117] "RemoveContainer" containerID="2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b" Feb 26 11:52:06 crc kubenswrapper[4724]: E0226 11:52:06.599480 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b\": container with ID starting with 2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b not found: ID does not exist" containerID="2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.599535 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b"} err="failed to get container status \"2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b\": rpc error: code = NotFound desc = could not find container \"2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b\": container with ID starting with 2b43f15feda8ac6835341a30cc2133abe07d1f70383f2bdceee06c4c5555459b not found: ID does not exist" Feb 26 11:52:06 crc 
kubenswrapper[4724]: I0226 11:52:06.599698 4724 scope.go:117] "RemoveContainer" containerID="a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88" Feb 26 11:52:06 crc kubenswrapper[4724]: E0226 11:52:06.600080 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88\": container with ID starting with a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88 not found: ID does not exist" containerID="a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.600118 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88"} err="failed to get container status \"a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88\": rpc error: code = NotFound desc = could not find container \"a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88\": container with ID starting with a5d0b2efe9d63b86477ff4fb8efb3333b52fb30283d065b904fd674a08754f88 not found: ID does not exist" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.600149 4724 scope.go:117] "RemoveContainer" containerID="3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5" Feb 26 11:52:06 crc kubenswrapper[4724]: E0226 11:52:06.600416 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5\": container with ID starting with 3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5 not found: ID does not exist" containerID="3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5" Feb 26 11:52:06 crc kubenswrapper[4724]: I0226 11:52:06.600437 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5"} err="failed to get container status \"3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5\": rpc error: code = NotFound desc = could not find container \"3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5\": container with ID starting with 3fc334dda42b6854da5e1ae8b80c0276da1aa3357e3a7def245f4457ce575af5 not found: ID does not exist" Feb 26 11:52:07 crc kubenswrapper[4724]: I0226 11:52:07.988074 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" path="/var/lib/kubelet/pods/bc09ac3e-6799-4f24-9f0a-8bc8d930bab4/volumes" Feb 26 11:52:12 crc kubenswrapper[4724]: I0226 11:52:12.975991 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" Feb 26 11:52:12 crc kubenswrapper[4724]: E0226 11:52:12.976848 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:52:26 crc kubenswrapper[4724]: I0226 11:52:26.976448 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b" 
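The repeated "back-off 5m0s restarting failed container" errors for machine-config-daemon-5gv7d (logged every 10-15 seconds from 11:49:51 through 11:52:12) are the sync loop re-evaluating a pod whose restart delay has reached the backoff cap; each attempt is skipped until the window lapses, so the RemoveContainer at 11:52:26 finally proceeds and ContainerStarted follows a second later. The ContainerStatus "NotFound" errors just above are likewise benign: RemoveContainer re-deletes containers the runtime has already pruned, so the status lookup fails by design. The restart delay itself is commonly described as doubling per failure up to the five-minute cap; a toy Go illustration of that growth (a simplified assumption, not kubelet's pod_workers code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for failure := 1; failure <= 7; failure++ {
            fmt.Printf("failure %d: back-off %v before next restart\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // the saturated value the log keeps printing: "back-off 5m0s"
            }
        }
    }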
Feb 26 11:52:27 crc kubenswrapper[4724]: I0226 11:52:27.663346 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"11de6d36c0a5960fa70c51b05c62d38f7ca71ddb060c31d0ec8ff22c36196169"}
Feb 26 11:53:00 crc kubenswrapper[4724]: I0226 11:53:00.085878 4724 scope.go:117] "RemoveContainer" containerID="975089a6d1d8eb823665d7b973a5b2021971cd8e5f8d2a62f4a23eff8998969b"
Feb 26 11:53:30 crc kubenswrapper[4724]: I0226 11:53:30.263899 4724 generic.go:334] "Generic (PLEG): container finished" podID="8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" containerID="a640a6bf4553f57f1c5982122b6971027cf3eed453d7bb6ea392379d8658bca8" exitCode=0
Feb 26 11:53:30 crc kubenswrapper[4724]: I0226 11:53:30.263990 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" event={"ID":"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e","Type":"ContainerDied","Data":"a640a6bf4553f57f1c5982122b6971027cf3eed453d7bb6ea392379d8658bca8"}
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.700786 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97"
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.760117 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-combined-ca-bundle\") pod \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") "
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.760200 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5hgc\" (UniqueName: \"kubernetes.io/projected/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-kube-api-access-f5hgc\") pod \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") "
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.760282 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-inventory\") pod \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") "
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.760327 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-ssh-key-openstack-edpm-ipam\") pod \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") "
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.760396 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-secret-0\") pod \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\" (UID: \"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e\") "
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.781280 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" (UID: "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.781476 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-kube-api-access-f5hgc" (OuterVolumeSpecName: "kube-api-access-f5hgc") pod "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" (UID: "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e"). InnerVolumeSpecName "kube-api-access-f5hgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.788654 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-inventory" (OuterVolumeSpecName: "inventory") pod "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" (UID: "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.790350 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" (UID: "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.794698 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" (UID: "8a0a7cda-6bc1-44ce-8d91-ca87271fb03e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.866283 4724 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.866319 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5hgc\" (UniqueName: \"kubernetes.io/projected/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-kube-api-access-f5hgc\") on node \"crc\" DevicePath \"\""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.866330 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.866343 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 11:53:31 crc kubenswrapper[4724]: I0226 11:53:31.866353 4724 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a0a7cda-6bc1-44ce-8d91-ca87271fb03e-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.283195 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97" event={"ID":"8a0a7cda-6bc1-44ce-8d91-ca87271fb03e","Type":"ContainerDied","Data":"c8af012504aa1efab4f13739cd18ce60ec28cdb9be958778449d79de1f674163"}
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.283248 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8af012504aa1efab4f13739cd18ce60ec28cdb9be958778449d79de1f674163"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.283359 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwr97"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.382829 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"]
Feb 26 11:53:32 crc kubenswrapper[4724]: E0226 11:53:32.383309 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce" containerName="oc"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383328 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce" containerName="oc"
Feb 26 11:53:32 crc kubenswrapper[4724]: E0226 11:53:32.383358 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383367 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server"
Feb 26 11:53:32 crc kubenswrapper[4724]: E0226 11:53:32.383394 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383403 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:53:32 crc kubenswrapper[4724]: E0226 11:53:32.383420 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="extract-utilities"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383428 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="extract-utilities"
Feb 26 11:53:32 crc kubenswrapper[4724]: E0226 11:53:32.383454 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="extract-content"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383462 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="extract-content"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383698 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce" containerName="oc"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383721 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc09ac3e-6799-4f24-9f0a-8bc8d930bab4" containerName="registry-server"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.383732 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0a7cda-6bc1-44ce-8d91-ca87271fb03e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.384481 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.386942 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.387692 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.387860 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.387898 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.388004 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.390020 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.402380 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.402675 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"]
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.484823 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.484895 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9b788179-93c8-43fa-9c05-ce6807179444-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.484950 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485089 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485169 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485212 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485299 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485342 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485628 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2g75\" (UniqueName: \"kubernetes.io/projected/9b788179-93c8-43fa-9c05-ce6807179444-kube-api-access-d2g75\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.485748 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587267 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587320 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9b788179-93c8-43fa-9c05-ce6807179444-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587358 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587389 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587409 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587428 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587471 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587508 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587653 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587706 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2g75\" (UniqueName: \"kubernetes.io/projected/9b788179-93c8-43fa-9c05-ce6807179444-kube-api-access-d2g75\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.587735 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.588953 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9b788179-93c8-43fa-9c05-ce6807179444-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.592750 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.594547 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.594974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.595120 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.595460 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.596392 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.596539 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.597632 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.603801 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.614302 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2g75\" (UniqueName: \"kubernetes.io/projected/9b788179-93c8-43fa-9c05-ce6807179444-kube-api-access-d2g75\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tm4z5\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:32 crc kubenswrapper[4724]: I0226 11:53:32.701499 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:53:33 crc kubenswrapper[4724]: I0226 11:53:33.259095 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"]
Feb 26 11:53:33 crc kubenswrapper[4724]: I0226 11:53:33.294747 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5" event={"ID":"9b788179-93c8-43fa-9c05-ce6807179444","Type":"ContainerStarted","Data":"c9ba6419adc43130022679a4ecce9145d53a9762fff8037ff15bea49dfe8505d"}
Feb 26 11:53:34 crc kubenswrapper[4724]: I0226 11:53:34.314586 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5" event={"ID":"9b788179-93c8-43fa-9c05-ce6807179444","Type":"ContainerStarted","Data":"f0b6089887bf0e08999b6609e3a0af98293b6a4d2e460c3ee42eb6e59e611f25"}
Feb 26 11:53:34 crc kubenswrapper[4724]: I0226 11:53:34.352899 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5" podStartSLOduration=1.8384625909999999 podStartE2EDuration="2.352878781s" podCreationTimestamp="2026-02-26 11:53:32 +0000 UTC" firstStartedPulling="2026-02-26 11:53:33.271355656 +0000 UTC m=+2879.927094771" lastFinishedPulling="2026-02-26 11:53:33.785771846 +0000 UTC m=+2880.441510961" observedRunningTime="2026-02-26 11:53:34.339145531 +0000 UTC m=+2880.994884646" watchObservedRunningTime="2026-02-26 11:53:34.352878781 +0000 UTC m=+2881.008617896"
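[Editor's note] The startup-latency record above is internally consistent, assuming (as the field names suggest) that the SLO figure excludes the image-pull window:

  podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                      = 11:53:34.352878781 - 11:53:32 = 2.352878781s
  pull window         = lastFinishedPulling - firstStartedPulling
                      = 11:53:33.785771846 - 11:53:33.271355656 = 0.514416190s
  podStartSLOduration = 2.352878781s - 0.514416190s = 1.838462591s

which matches the logged podStartSLOduration=1.8384625909999999 up to float rounding.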
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.160121 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535114-df786"]
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.162319 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.164480 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.164687 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.164882 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.185628 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-df786"]
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.308131 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d-kube-api-access-ld8rw\") pod \"auto-csr-approver-29535114-df786\" (UID: \"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d\") " pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.409634 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d-kube-api-access-ld8rw\") pod \"auto-csr-approver-29535114-df786\" (UID: \"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d\") " pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.449259 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d-kube-api-access-ld8rw\") pod \"auto-csr-approver-29535114-df786\" (UID: \"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d\") " pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:00 crc kubenswrapper[4724]: I0226 11:54:00.488418 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:01 crc kubenswrapper[4724]: I0226 11:54:01.066370 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-df786"]
Feb 26 11:54:01 crc kubenswrapper[4724]: W0226 11:54:01.071317 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30d9b6c6_89a4_4fb2_a2f0_5e169c3c359d.slice/crio-a12c00ec84c39b5866fcdc57f03a73bb71d153c9c782db8d5673ba4c3fd84eb4 WatchSource:0}: Error finding container a12c00ec84c39b5866fcdc57f03a73bb71d153c9c782db8d5673ba4c3fd84eb4: Status 404 returned error can't find the container with id a12c00ec84c39b5866fcdc57f03a73bb71d153c9c782db8d5673ba4c3fd84eb4
Feb 26 11:54:01 crc kubenswrapper[4724]: I0226 11:54:01.531407 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-df786" event={"ID":"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d","Type":"ContainerStarted","Data":"a12c00ec84c39b5866fcdc57f03a73bb71d153c9c782db8d5673ba4c3fd84eb4"}
Feb 26 11:54:02 crc kubenswrapper[4724]: I0226 11:54:02.542333 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-df786" event={"ID":"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d","Type":"ContainerStarted","Data":"013f716743e64afb694d637371369e363ee3d7cc6f8a69acd8966d870d902577"}
Feb 26 11:54:03 crc kubenswrapper[4724]: I0226 11:54:03.552546 4724 generic.go:334] "Generic (PLEG): container finished" podID="30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d" containerID="013f716743e64afb694d637371369e363ee3d7cc6f8a69acd8966d870d902577" exitCode=0
Feb 26 11:54:03 crc kubenswrapper[4724]: I0226 11:54:03.552618 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-df786" event={"ID":"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d","Type":"ContainerDied","Data":"013f716743e64afb694d637371369e363ee3d7cc6f8a69acd8966d870d902577"}
Feb 26 11:54:04 crc kubenswrapper[4724]: I0226 11:54:04.908728 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.005187 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d-kube-api-access-ld8rw\") pod \"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d\" (UID: \"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d\") "
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.015485 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d-kube-api-access-ld8rw" (OuterVolumeSpecName: "kube-api-access-ld8rw") pod "30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d" (UID: "30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d"). InnerVolumeSpecName "kube-api-access-ld8rw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.107880 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d-kube-api-access-ld8rw\") on node \"crc\" DevicePath \"\""
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.570635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535114-df786" event={"ID":"30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d","Type":"ContainerDied","Data":"a12c00ec84c39b5866fcdc57f03a73bb71d153c9c782db8d5673ba4c3fd84eb4"}
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.570924 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12c00ec84c39b5866fcdc57f03a73bb71d153c9c782db8d5673ba4c3fd84eb4"
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.570676 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535114-df786"
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.679675 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-mhgbs"]
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.699686 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535108-mhgbs"]
Feb 26 11:54:05 crc kubenswrapper[4724]: I0226 11:54:05.988079 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612de9a5-9cf9-412f-823d-8be9cd1ebbdf" path="/var/lib/kubelet/pods/612de9a5-9cf9-412f-823d-8be9cd1ebbdf/volumes"
Feb 26 11:54:46 crc kubenswrapper[4724]: I0226 11:54:46.906270 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:54:46 crc kubenswrapper[4724]: I0226 11:54:46.907431 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:55:00 crc kubenswrapper[4724]: I0226 11:55:00.206027 4724 scope.go:117] "RemoveContainer" containerID="477cdce3d35da0d80f5e894983da9dd0f3edbfc216a97bcb67792032f8c97dcf"
Feb 26 11:55:16 crc kubenswrapper[4724]: I0226 11:55:16.907210 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:55:16 crc kubenswrapper[4724]: I0226 11:55:16.907925 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:55:46 crc kubenswrapper[4724]: I0226 11:55:46.906339 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:55:46 crc kubenswrapper[4724]: I0226 11:55:46.906878 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:55:46 crc kubenswrapper[4724]: I0226 11:55:46.906940 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 11:55:46 crc kubenswrapper[4724]: I0226 11:55:46.907845 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11de6d36c0a5960fa70c51b05c62d38f7ca71ddb060c31d0ec8ff22c36196169"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 11:55:46 crc kubenswrapper[4724]: I0226 11:55:46.907911 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://11de6d36c0a5960fa70c51b05c62d38f7ca71ddb060c31d0ec8ff22c36196169" gracePeriod=600
Feb 26 11:55:47 crc kubenswrapper[4724]: I0226 11:55:47.569658 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="11de6d36c0a5960fa70c51b05c62d38f7ca71ddb060c31d0ec8ff22c36196169" exitCode=0
Feb 26 11:55:47 crc kubenswrapper[4724]: I0226 11:55:47.569720 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"11de6d36c0a5960fa70c51b05c62d38f7ca71ddb060c31d0ec8ff22c36196169"}
Feb 26 11:55:47 crc kubenswrapper[4724]: I0226 11:55:47.570132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"}
Feb 26 11:55:47 crc kubenswrapper[4724]: I0226 11:55:47.570163 4724 scope.go:117] "RemoveContainer" containerID="3f96ee3a97b99b2169ede2ae54e2a1f98d9c1559ccc64d9100aa489c2233376b"
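[Editor's note] The repeated probe failures above are plain HTTP GETs against the machine-config-daemon's health endpoint; a transport error such as connection refused counts as a failed probe, and after enough consecutive failures the kubelet kills the container with the configured grace period (600 s here) and restarts it. A minimal Go sketch of such a check, with an illustrative timeout (kubelet's HTTP probes treat 2xx/3xx as success):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check against url.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}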
Feb 26 11:55:56 crc kubenswrapper[4724]: I0226 11:55:56.655532 4724 generic.go:334] "Generic (PLEG): container finished" podID="9b788179-93c8-43fa-9c05-ce6807179444" containerID="f0b6089887bf0e08999b6609e3a0af98293b6a4d2e460c3ee42eb6e59e611f25" exitCode=0
Feb 26 11:55:56 crc kubenswrapper[4724]: I0226 11:55:56.655606 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5" event={"ID":"9b788179-93c8-43fa-9c05-ce6807179444","Type":"ContainerDied","Data":"f0b6089887bf0e08999b6609e3a0af98293b6a4d2e460c3ee42eb6e59e611f25"}
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.121567 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.285901 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-1\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.287281 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-1\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.287426 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-3\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.287538 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-2\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.287638 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-0\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.287792 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-inventory\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.287954 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9b788179-93c8-43fa-9c05-ce6807179444-nova-extra-config-0\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.288120 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-ssh-key-openstack-edpm-ipam\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.288300 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2g75\" (UniqueName: \"kubernetes.io/projected/9b788179-93c8-43fa-9c05-ce6807179444-kube-api-access-d2g75\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.288386 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-0\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.288505 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-combined-ca-bundle\") pod \"9b788179-93c8-43fa-9c05-ce6807179444\" (UID: \"9b788179-93c8-43fa-9c05-ce6807179444\") "
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.292644 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.314941 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b788179-93c8-43fa-9c05-ce6807179444-kube-api-access-d2g75" (OuterVolumeSpecName: "kube-api-access-d2g75") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "kube-api-access-d2g75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.317991 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.320453 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.322126 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.324030 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.324107 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-inventory" (OuterVolumeSpecName: "inventory") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.324486 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.328113 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.347593 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b788179-93c8-43fa-9c05-ce6807179444-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.348092 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9b788179-93c8-43fa-9c05-ce6807179444" (UID: "9b788179-93c8-43fa-9c05-ce6807179444"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.390948 4724 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391008 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391018 4724 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391028 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391037 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391045 4724 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391054 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391065 4724 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9b788179-93c8-43fa-9c05-ce6807179444-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391073 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391081 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2g75\" (UniqueName: \"kubernetes.io/projected/9b788179-93c8-43fa-9c05-ce6807179444-kube-api-access-d2g75\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.391088 4724 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9b788179-93c8-43fa-9c05-ce6807179444-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.675071 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5" event={"ID":"9b788179-93c8-43fa-9c05-ce6807179444","Type":"ContainerDied","Data":"c9ba6419adc43130022679a4ecce9145d53a9762fff8037ff15bea49dfe8505d"}
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.675415 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9ba6419adc43130022679a4ecce9145d53a9762fff8037ff15bea49dfe8505d"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.675200 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tm4z5"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.780983 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"]
Feb 26 11:55:58 crc kubenswrapper[4724]: E0226 11:55:58.781611 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d" containerName="oc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.781677 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d" containerName="oc"
Feb 26 11:55:58 crc kubenswrapper[4724]: E0226 11:55:58.781735 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b788179-93c8-43fa-9c05-ce6807179444" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.781784 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b788179-93c8-43fa-9c05-ce6807179444" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.782040 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b788179-93c8-43fa-9c05-ce6807179444" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.782109 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d" containerName="oc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.782790 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
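[Editor's note] Before the telemetry pod is admitted, the CPU and memory managers prune per-container assignments left behind by pods that no longer exist (the paired "RemoveStaleState: removing container" / "Deleted CPUSet assignment" entries above). A toy Go sketch of that pruning, with hypothetical types rather than kubelet's actual state store:

package main

import "fmt"

// removeStaleState drops every (podUID, container) assignment whose pod
// is no longer active, mirroring the log entries above.
func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
			delete(containers, name)
		}
		delete(assignments, podUID)
	}
}

func main() {
	state := map[string]map[string]string{
		"30d9b6c6": {"oc": "cpuset:0-1"}, // stale: pod already deleted
	}
	removeStaleState(state, map[string]bool{})
	fmt.Println(state) // map[]
}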
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.785476 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.785521 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.785536 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.785581 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.790800 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xvq4m"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.797065 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"]
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.799705 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.800372 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.800666 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.800773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.801004 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zt7\" (UniqueName: \"kubernetes.io/projected/b9209966-a73c-4858-8faf-9053e5447993-kube-api-access-78zt7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.801285 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.801416 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.903134 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78zt7\" (UniqueName: \"kubernetes.io/projected/b9209966-a73c-4858-8faf-9053e5447993-kube-api-access-78zt7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.903506 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.903743 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.903901 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.904057 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.904575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.904679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.907391 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.907869 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.911263 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.912299 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.920868 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.924929 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"
Feb 26 11:55:58 crc kubenswrapper[4724]: I0226 11:55:58.927374 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78zt7\" (UniqueName: \"kubernetes.io/projected/b9209966-a73c-4858-8faf-9053e5447993-kube-api-access-78zt7\")
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-stfrc\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" Feb 26 11:55:59 crc kubenswrapper[4724]: I0226 11:55:59.108267 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" Feb 26 11:55:59 crc kubenswrapper[4724]: I0226 11:55:59.663317 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc"] Feb 26 11:55:59 crc kubenswrapper[4724]: I0226 11:55:59.685547 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" event={"ID":"b9209966-a73c-4858-8faf-9053e5447993","Type":"ContainerStarted","Data":"fb2d59a0ae1bdfe4c83f1645de431897959f22b756f6542ea5d7c1b4c414fe9e"} Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.140937 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535116-k8lc7"] Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.142389 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.153401 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.153615 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.153748 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.162314 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-k8lc7"] Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.245377 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9gsk\" (UniqueName: \"kubernetes.io/projected/52d4f9c1-8152-4d31-98db-2a1bb1b731ec-kube-api-access-w9gsk\") pod \"auto-csr-approver-29535116-k8lc7\" (UID: \"52d4f9c1-8152-4d31-98db-2a1bb1b731ec\") " pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.347676 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9gsk\" (UniqueName: \"kubernetes.io/projected/52d4f9c1-8152-4d31-98db-2a1bb1b731ec-kube-api-access-w9gsk\") pod \"auto-csr-approver-29535116-k8lc7\" (UID: \"52d4f9c1-8152-4d31-98db-2a1bb1b731ec\") " pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.366775 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9gsk\" (UniqueName: \"kubernetes.io/projected/52d4f9c1-8152-4d31-98db-2a1bb1b731ec-kube-api-access-w9gsk\") pod \"auto-csr-approver-29535116-k8lc7\" (UID: \"52d4f9c1-8152-4d31-98db-2a1bb1b731ec\") " pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.590339 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.713574 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" event={"ID":"b9209966-a73c-4858-8faf-9053e5447993","Type":"ContainerStarted","Data":"26643c13b04cefb3594ea4713eaf057caf426bf6e6ac4328de15248cf97f2673"} Feb 26 11:56:00 crc kubenswrapper[4724]: I0226 11:56:00.737814 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" podStartSLOduration=2.314328649 podStartE2EDuration="2.737795566s" podCreationTimestamp="2026-02-26 11:55:58 +0000 UTC" firstStartedPulling="2026-02-26 11:55:59.665483261 +0000 UTC m=+3026.321222376" lastFinishedPulling="2026-02-26 11:56:00.088950188 +0000 UTC m=+3026.744689293" observedRunningTime="2026-02-26 11:56:00.731625878 +0000 UTC m=+3027.387365003" watchObservedRunningTime="2026-02-26 11:56:00.737795566 +0000 UTC m=+3027.393534681" Feb 26 11:56:01 crc kubenswrapper[4724]: I0226 11:56:01.057743 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-k8lc7"] Feb 26 11:56:01 crc kubenswrapper[4724]: I0226 11:56:01.725020 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" event={"ID":"52d4f9c1-8152-4d31-98db-2a1bb1b731ec","Type":"ContainerStarted","Data":"74823e644adc16453af793f25963893443ee7265a5e68416c10f0d942d8fb902"} Feb 26 11:56:02 crc kubenswrapper[4724]: I0226 11:56:02.736944 4724 generic.go:334] "Generic (PLEG): container finished" podID="52d4f9c1-8152-4d31-98db-2a1bb1b731ec" containerID="0d965d8bad80b95a7c22e0743071ae7a6c0090f4fcb884ec603549f4611c4246" exitCode=0 Feb 26 11:56:02 crc kubenswrapper[4724]: I0226 11:56:02.737011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" event={"ID":"52d4f9c1-8152-4d31-98db-2a1bb1b731ec","Type":"ContainerDied","Data":"0d965d8bad80b95a7c22e0743071ae7a6c0090f4fcb884ec603549f4611c4246"} Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.123045 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.249960 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9gsk\" (UniqueName: \"kubernetes.io/projected/52d4f9c1-8152-4d31-98db-2a1bb1b731ec-kube-api-access-w9gsk\") pod \"52d4f9c1-8152-4d31-98db-2a1bb1b731ec\" (UID: \"52d4f9c1-8152-4d31-98db-2a1bb1b731ec\") " Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.257436 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d4f9c1-8152-4d31-98db-2a1bb1b731ec-kube-api-access-w9gsk" (OuterVolumeSpecName: "kube-api-access-w9gsk") pod "52d4f9c1-8152-4d31-98db-2a1bb1b731ec" (UID: "52d4f9c1-8152-4d31-98db-2a1bb1b731ec"). InnerVolumeSpecName "kube-api-access-w9gsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.352789 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9gsk\" (UniqueName: \"kubernetes.io/projected/52d4f9c1-8152-4d31-98db-2a1bb1b731ec-kube-api-access-w9gsk\") on node \"crc\" DevicePath \"\"" Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.755661 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" event={"ID":"52d4f9c1-8152-4d31-98db-2a1bb1b731ec","Type":"ContainerDied","Data":"74823e644adc16453af793f25963893443ee7265a5e68416c10f0d942d8fb902"} Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.755702 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74823e644adc16453af793f25963893443ee7265a5e68416c10f0d942d8fb902" Feb 26 11:56:04 crc kubenswrapper[4724]: I0226 11:56:04.755802 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535116-k8lc7" Feb 26 11:56:05 crc kubenswrapper[4724]: I0226 11:56:05.207161 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-jx99k"] Feb 26 11:56:05 crc kubenswrapper[4724]: I0226 11:56:05.219049 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535110-jx99k"] Feb 26 11:56:05 crc kubenswrapper[4724]: I0226 11:56:05.988669 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e81268-2152-485e-ab23-331c1e0e738e" path="/var/lib/kubelet/pods/c7e81268-2152-485e-ab23-331c1e0e738e/volumes" Feb 26 11:57:00 crc kubenswrapper[4724]: I0226 11:57:00.311639 4724 scope.go:117] "RemoveContainer" containerID="c001fb455ec7730566a14e4ab9b7d520719db426453d6fbc9e881ff5769b128c" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.171443 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535118-5d7rd"] Feb 26 11:58:00 crc kubenswrapper[4724]: E0226 11:58:00.172799 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d4f9c1-8152-4d31-98db-2a1bb1b731ec" containerName="oc" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.172821 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d4f9c1-8152-4d31-98db-2a1bb1b731ec" containerName="oc" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.173106 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d4f9c1-8152-4d31-98db-2a1bb1b731ec" containerName="oc" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.174125 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.181301 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.181500 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.182070 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.183225 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-5d7rd"] Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.317029 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vtm\" (UniqueName: \"kubernetes.io/projected/98933753-e9bb-495a-a8fa-b8dc924c173b-kube-api-access-68vtm\") pod \"auto-csr-approver-29535118-5d7rd\" (UID: \"98933753-e9bb-495a-a8fa-b8dc924c173b\") " pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.419296 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68vtm\" (UniqueName: \"kubernetes.io/projected/98933753-e9bb-495a-a8fa-b8dc924c173b-kube-api-access-68vtm\") pod \"auto-csr-approver-29535118-5d7rd\" (UID: \"98933753-e9bb-495a-a8fa-b8dc924c173b\") " pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.438282 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68vtm\" (UniqueName: \"kubernetes.io/projected/98933753-e9bb-495a-a8fa-b8dc924c173b-kube-api-access-68vtm\") pod \"auto-csr-approver-29535118-5d7rd\" (UID: \"98933753-e9bb-495a-a8fa-b8dc924c173b\") " pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.508220 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.983069 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-5d7rd"] Feb 26 11:58:00 crc kubenswrapper[4724]: I0226 11:58:00.987997 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 11:58:01 crc kubenswrapper[4724]: I0226 11:58:01.880314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" event={"ID":"98933753-e9bb-495a-a8fa-b8dc924c173b","Type":"ContainerStarted","Data":"47332c525a15ac39ab0985c08beeaff6337f98f9a9e14ab2cac3944573647a23"} Feb 26 11:58:03 crc kubenswrapper[4724]: I0226 11:58:03.903799 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" event={"ID":"98933753-e9bb-495a-a8fa-b8dc924c173b","Type":"ContainerStarted","Data":"4293b07d4c4239e06f4312ed62720336077aff8dd13d37f116816500faf8bcd5"} Feb 26 11:58:04 crc kubenswrapper[4724]: I0226 11:58:04.914082 4724 generic.go:334] "Generic (PLEG): container finished" podID="98933753-e9bb-495a-a8fa-b8dc924c173b" containerID="4293b07d4c4239e06f4312ed62720336077aff8dd13d37f116816500faf8bcd5" exitCode=0 Feb 26 11:58:04 crc kubenswrapper[4724]: I0226 11:58:04.914140 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" event={"ID":"98933753-e9bb-495a-a8fa-b8dc924c173b","Type":"ContainerDied","Data":"4293b07d4c4239e06f4312ed62720336077aff8dd13d37f116816500faf8bcd5"} Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.283060 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.422600 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68vtm\" (UniqueName: \"kubernetes.io/projected/98933753-e9bb-495a-a8fa-b8dc924c173b-kube-api-access-68vtm\") pod \"98933753-e9bb-495a-a8fa-b8dc924c173b\" (UID: \"98933753-e9bb-495a-a8fa-b8dc924c173b\") " Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.429610 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98933753-e9bb-495a-a8fa-b8dc924c173b-kube-api-access-68vtm" (OuterVolumeSpecName: "kube-api-access-68vtm") pod "98933753-e9bb-495a-a8fa-b8dc924c173b" (UID: "98933753-e9bb-495a-a8fa-b8dc924c173b"). InnerVolumeSpecName "kube-api-access-68vtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.525014 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68vtm\" (UniqueName: \"kubernetes.io/projected/98933753-e9bb-495a-a8fa-b8dc924c173b-kube-api-access-68vtm\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.926867 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" event={"ID":"98933753-e9bb-495a-a8fa-b8dc924c173b","Type":"ContainerDied","Data":"47332c525a15ac39ab0985c08beeaff6337f98f9a9e14ab2cac3944573647a23"} Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.926913 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47332c525a15ac39ab0985c08beeaff6337f98f9a9e14ab2cac3944573647a23" Feb 26 11:58:05 crc kubenswrapper[4724]: I0226 11:58:05.926913 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535118-5d7rd" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.132041 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g78j2"] Feb 26 11:58:06 crc kubenswrapper[4724]: E0226 11:58:06.132581 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98933753-e9bb-495a-a8fa-b8dc924c173b" containerName="oc" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.132599 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="98933753-e9bb-495a-a8fa-b8dc924c173b" containerName="oc" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.132828 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="98933753-e9bb-495a-a8fa-b8dc924c173b" containerName="oc" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.134557 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.155949 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g78j2"] Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.238188 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-catalog-content\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.238252 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfgl\" (UniqueName: \"kubernetes.io/projected/2a5a4932-1413-46d9-a63d-6e4596ca6b47-kube-api-access-zsfgl\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.238274 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-utilities\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.340530 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-catalog-content\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.340614 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfgl\" (UniqueName: \"kubernetes.io/projected/2a5a4932-1413-46d9-a63d-6e4596ca6b47-kube-api-access-zsfgl\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.340646 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-utilities\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.341290 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-utilities\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.341413 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-catalog-content\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.364788 4724 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-tpht5"] Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.372692 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535112-tpht5"] Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.381481 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfgl\" (UniqueName: \"kubernetes.io/projected/2a5a4932-1413-46d9-a63d-6e4596ca6b47-kube-api-access-zsfgl\") pod \"certified-operators-g78j2\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:06 crc kubenswrapper[4724]: I0226 11:58:06.479294 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:07 crc kubenswrapper[4724]: I0226 11:58:07.048099 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g78j2"] Feb 26 11:58:07 crc kubenswrapper[4724]: I0226 11:58:07.984507 4724 generic.go:334] "Generic (PLEG): container finished" podID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerID="5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e" exitCode=0 Feb 26 11:58:07 crc kubenswrapper[4724]: I0226 11:58:07.990724 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce" path="/var/lib/kubelet/pods/a46d3c23-e2fd-43f0-9f3c-ad58bd0237ce/volumes" Feb 26 11:58:07 crc kubenswrapper[4724]: I0226 11:58:07.991554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerDied","Data":"5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e"} Feb 26 11:58:07 crc kubenswrapper[4724]: I0226 11:58:07.991582 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerStarted","Data":"5cc47d8b683797732e552f0e68f2329594f27cb0891da2241333a34c41d59d30"} Feb 26 11:58:10 crc kubenswrapper[4724]: I0226 11:58:10.049625 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerStarted","Data":"f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b"} Feb 26 11:58:14 crc kubenswrapper[4724]: I0226 11:58:14.085424 4724 generic.go:334] "Generic (PLEG): container finished" podID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerID="f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b" exitCode=0 Feb 26 11:58:14 crc kubenswrapper[4724]: I0226 11:58:14.085524 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerDied","Data":"f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b"} Feb 26 11:58:15 crc kubenswrapper[4724]: I0226 11:58:15.108074 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerStarted","Data":"6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f"} Feb 26 11:58:15 crc kubenswrapper[4724]: I0226 11:58:15.136794 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-g78j2" podStartSLOduration=2.536493658 podStartE2EDuration="9.136772376s" podCreationTimestamp="2026-02-26 11:58:06 +0000 UTC" firstStartedPulling="2026-02-26 11:58:07.986678498 +0000 UTC m=+3154.642417623" lastFinishedPulling="2026-02-26 11:58:14.586957216 +0000 UTC m=+3161.242696341" observedRunningTime="2026-02-26 11:58:15.129002585 +0000 UTC m=+3161.784741740" watchObservedRunningTime="2026-02-26 11:58:15.136772376 +0000 UTC m=+3161.792511491" Feb 26 11:58:16 crc kubenswrapper[4724]: I0226 11:58:16.479884 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:16 crc kubenswrapper[4724]: I0226 11:58:16.480302 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:16 crc kubenswrapper[4724]: I0226 11:58:16.906454 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:58:16 crc kubenswrapper[4724]: I0226 11:58:16.906542 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:58:17 crc kubenswrapper[4724]: I0226 11:58:17.531988 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g78j2" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="registry-server" probeResult="failure" output=< Feb 26 11:58:17 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:58:17 crc kubenswrapper[4724]: > Feb 26 11:58:27 crc kubenswrapper[4724]: I0226 11:58:27.551326 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g78j2" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="registry-server" probeResult="failure" output=< Feb 26 11:58:27 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 11:58:27 crc kubenswrapper[4724]: > Feb 26 11:58:36 crc kubenswrapper[4724]: I0226 11:58:36.584810 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:36 crc kubenswrapper[4724]: I0226 11:58:36.651774 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:37 crc kubenswrapper[4724]: I0226 11:58:37.334213 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g78j2"] Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.297786 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g78j2" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="registry-server" containerID="cri-o://6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f" gracePeriod=2 Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.813294 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.862108 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsfgl\" (UniqueName: \"kubernetes.io/projected/2a5a4932-1413-46d9-a63d-6e4596ca6b47-kube-api-access-zsfgl\") pod \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.862365 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-catalog-content\") pod \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.869138 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5a4932-1413-46d9-a63d-6e4596ca6b47-kube-api-access-zsfgl" (OuterVolumeSpecName: "kube-api-access-zsfgl") pod "2a5a4932-1413-46d9-a63d-6e4596ca6b47" (UID: "2a5a4932-1413-46d9-a63d-6e4596ca6b47"). InnerVolumeSpecName "kube-api-access-zsfgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.869806 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-utilities" (OuterVolumeSpecName: "utilities") pod "2a5a4932-1413-46d9-a63d-6e4596ca6b47" (UID: "2a5a4932-1413-46d9-a63d-6e4596ca6b47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.870303 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-utilities\") pod \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\" (UID: \"2a5a4932-1413-46d9-a63d-6e4596ca6b47\") " Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.874887 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.874938 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsfgl\" (UniqueName: \"kubernetes.io/projected/2a5a4932-1413-46d9-a63d-6e4596ca6b47-kube-api-access-zsfgl\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.947453 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a5a4932-1413-46d9-a63d-6e4596ca6b47" (UID: "2a5a4932-1413-46d9-a63d-6e4596ca6b47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 11:58:38 crc kubenswrapper[4724]: I0226 11:58:38.976246 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5a4932-1413-46d9-a63d-6e4596ca6b47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.308336 4724 generic.go:334] "Generic (PLEG): container finished" podID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerID="6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f" exitCode=0 Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.308380 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerDied","Data":"6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f"} Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.308410 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g78j2" event={"ID":"2a5a4932-1413-46d9-a63d-6e4596ca6b47","Type":"ContainerDied","Data":"5cc47d8b683797732e552f0e68f2329594f27cb0891da2241333a34c41d59d30"} Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.308429 4724 scope.go:117] "RemoveContainer" containerID="6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f" Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.309268 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g78j2" Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.331884 4724 scope.go:117] "RemoveContainer" containerID="f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b" Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.349518 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g78j2"] Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.361870 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g78j2"] Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.374966 4724 scope.go:117] "RemoveContainer" containerID="5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e" Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.421011 4724 scope.go:117] "RemoveContainer" containerID="6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f" Feb 26 11:58:39 crc kubenswrapper[4724]: E0226 11:58:39.421547 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f\": container with ID starting with 6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f not found: ID does not exist" containerID="6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f" Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.421598 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f"} err="failed to get container status \"6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f\": rpc error: code = NotFound desc = could not find container \"6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f\": container with ID starting with 6132c0c813b9161991953e069137446c5ae374e25cd3bce78699d6335dbd914f not found: ID does not exist" Feb 26 
Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.421626 4724 scope.go:117] "RemoveContainer" containerID="f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b"
Feb 26 11:58:39 crc kubenswrapper[4724]: E0226 11:58:39.422233 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b\": container with ID starting with f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b not found: ID does not exist" containerID="f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b"
Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.422342 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b"} err="failed to get container status \"f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b\": rpc error: code = NotFound desc = could not find container \"f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b\": container with ID starting with f8a5b4bccf8bf510c953aa250d8e7b1477bd5bfab1f348c940de8b367f33517b not found: ID does not exist"
Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.422376 4724 scope.go:117] "RemoveContainer" containerID="5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e"
Feb 26 11:58:39 crc kubenswrapper[4724]: E0226 11:58:39.422733 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e\": container with ID starting with 5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e not found: ID does not exist" containerID="5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e"
Feb 26 11:58:39 crc kubenswrapper[4724]: I0226 11:58:39.422770 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e"} err="failed to get container status \"5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e\": rpc error: code = NotFound desc = could not find container \"5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e\": container with ID starting with 5567dd9e421f390204c8f0ffbcf486e239db37d19b15808fd135dd89ec3b149e not found: ID does not exist"
Feb 26 11:58:40 crc kubenswrapper[4724]: I0226 11:58:40.000015 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" path="/var/lib/kubelet/pods/2a5a4932-1413-46d9-a63d-6e4596ca6b47/volumes"
Feb 26 11:58:46 crc kubenswrapper[4724]: I0226 11:58:46.907644 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 11:58:46 crc kubenswrapper[4724]: I0226 11:58:46.908250 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 11:58:52 crc kubenswrapper[4724]: I0226 11:58:52.446995 4724 generic.go:334] 
"Generic (PLEG): container finished" podID="b9209966-a73c-4858-8faf-9053e5447993" containerID="26643c13b04cefb3594ea4713eaf057caf426bf6e6ac4328de15248cf97f2673" exitCode=0 Feb 26 11:58:52 crc kubenswrapper[4724]: I0226 11:58:52.447093 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" event={"ID":"b9209966-a73c-4858-8faf-9053e5447993","Type":"ContainerDied","Data":"26643c13b04cefb3594ea4713eaf057caf426bf6e6ac4328de15248cf97f2673"} Feb 26 11:58:53 crc kubenswrapper[4724]: I0226 11:58:53.874981 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043225 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-telemetry-combined-ca-bundle\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043335 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-0\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043656 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-1\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043730 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-2\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043777 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78zt7\" (UniqueName: \"kubernetes.io/projected/b9209966-a73c-4858-8faf-9053e5447993-kube-api-access-78zt7\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043923 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-inventory\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.043998 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ssh-key-openstack-edpm-ipam\") pod \"b9209966-a73c-4858-8faf-9053e5447993\" (UID: \"b9209966-a73c-4858-8faf-9053e5447993\") " Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.051809 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b9209966-a73c-4858-8faf-9053e5447993-kube-api-access-78zt7" (OuterVolumeSpecName: "kube-api-access-78zt7") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "kube-api-access-78zt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.055002 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.077841 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.078970 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-inventory" (OuterVolumeSpecName: "inventory") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.087597 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.087896 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.089862 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b9209966-a73c-4858-8faf-9053e5447993" (UID: "b9209966-a73c-4858-8faf-9053e5447993"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.154866 4724 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.154907 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.154922 4724 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.155081 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.155112 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.155126 4724 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9209966-a73c-4858-8faf-9053e5447993-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.155139 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78zt7\" (UniqueName: \"kubernetes.io/projected/b9209966-a73c-4858-8faf-9053e5447993-kube-api-access-78zt7\") on node \"crc\" DevicePath \"\"" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.468115 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" event={"ID":"b9209966-a73c-4858-8faf-9053e5447993","Type":"ContainerDied","Data":"fb2d59a0ae1bdfe4c83f1645de431897959f22b756f6542ea5d7c1b4c414fe9e"} Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.468170 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2d59a0ae1bdfe4c83f1645de431897959f22b756f6542ea5d7c1b4c414fe9e" Feb 26 11:58:54 crc kubenswrapper[4724]: I0226 11:58:54.468227 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-stfrc" Feb 26 11:59:00 crc kubenswrapper[4724]: I0226 11:59:00.415260 4724 scope.go:117] "RemoveContainer" containerID="6ebec88c02f795ba9d01e78a1fe4d811c88e4b801e90e505ecab32e1e6f4258b" Feb 26 11:59:16 crc kubenswrapper[4724]: I0226 11:59:16.905968 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 11:59:16 crc kubenswrapper[4724]: I0226 11:59:16.906611 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 11:59:16 crc kubenswrapper[4724]: I0226 11:59:16.906664 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 11:59:16 crc kubenswrapper[4724]: I0226 11:59:16.907509 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 11:59:16 crc kubenswrapper[4724]: I0226 11:59:16.907565 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" gracePeriod=600 Feb 26 11:59:17 crc kubenswrapper[4724]: E0226 11:59:17.031943 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 11:59:17 crc kubenswrapper[4724]: I0226 11:59:17.685928 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" exitCode=0 Feb 26 11:59:17 crc kubenswrapper[4724]: I0226 11:59:17.685974 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"} Feb 26 11:59:17 crc kubenswrapper[4724]: I0226 11:59:17.686014 4724 scope.go:117] "RemoveContainer" containerID="11de6d36c0a5960fa70c51b05c62d38f7ca71ddb060c31d0ec8ff22c36196169" Feb 26 11:59:17 crc kubenswrapper[4724]: I0226 11:59:17.686832 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 11:59:17 crc kubenswrapper[4724]: E0226 
Feb 26 11:59:17 crc kubenswrapper[4724]: E0226 11:59:17.687325 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 11:59:29 crc kubenswrapper[4724]: I0226 11:59:29.976159 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"
Feb 26 11:59:29 crc kubenswrapper[4724]: E0226 11:59:29.977069 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 11:59:41 crc kubenswrapper[4724]: I0226 11:59:41.975878 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"
Feb 26 11:59:41 crc kubenswrapper[4724]: E0226 11:59:41.976723 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 11:59:55 crc kubenswrapper[4724]: I0226 11:59:55.976121 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"
Feb 26 11:59:55 crc kubenswrapper[4724]: E0226 11:59:55.978401 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.160914 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb"]
Feb 26 12:00:00 crc kubenswrapper[4724]: E0226 12:00:00.161862 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="extract-utilities"
Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.161884 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="extract-utilities"
Feb 26 12:00:00 crc kubenswrapper[4724]: E0226 12:00:00.161911 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="registry-server"
Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.161923 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="registry-server"
Feb 26 12:00:00 crc kubenswrapper[4724]: E0226 12:00:00.161980 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9209966-a73c-4858-8faf-9053e5447993" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
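The repeating "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s" entries above show the restart backoff already pinned at its cap: each sync attempt asks to start the container and is refused while the backoff timer runs, with the delay doubling per failure up to a ceiling. Only the 5m cap appears in the messages; the 10s initial delay and factor of 2 below are the long-standing kubelet defaults, stated here as an assumption rather than read from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the CrashLoopBackOff delay after n restarts:
    // assumed 10s initial delay, doubling per failure, capped at 5 minutes.
    func backoff(restarts int) time.Duration {
        const maxDelay = 5 * time.Minute
        d := 10 * time.Second
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restarts=%d delay=%v\n", r, backoff(r)) // 10s, 20s, 40s, ..., 5m0s
        }
    }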
"RemoveStaleState: removing container" podUID="b9209966-a73c-4858-8faf-9053e5447993" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.161992 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9209966-a73c-4858-8faf-9053e5447993" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 12:00:00 crc kubenswrapper[4724]: E0226 12:00:00.162015 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="extract-content" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.162026 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="extract-content" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.162334 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5a4932-1413-46d9-a63d-6e4596ca6b47" containerName="registry-server" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.162369 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9209966-a73c-4858-8faf-9053e5447993" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.163392 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.166432 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.168708 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.169752 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb"] Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.251986 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535120-qfnj9"] Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.253494 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.255943 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.257763 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.259166 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.271992 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-qfnj9"] Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.299421 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-887zb\" (UniqueName: \"kubernetes.io/projected/1efc605b-a275-457e-baf3-3548c0eb929e-kube-api-access-887zb\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.299488 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1efc605b-a275-457e-baf3-3548c0eb929e-config-volume\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.299646 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1efc605b-a275-457e-baf3-3548c0eb929e-secret-volume\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.401817 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-887zb\" (UniqueName: \"kubernetes.io/projected/1efc605b-a275-457e-baf3-3548c0eb929e-kube-api-access-887zb\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.402146 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1efc605b-a275-457e-baf3-3548c0eb929e-config-volume\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.402408 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1efc605b-a275-457e-baf3-3548c0eb929e-secret-volume\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.402638 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-22pvs\" (UniqueName: \"kubernetes.io/projected/3d49bbd9-833c-413f-a187-ebbb2a4bce2b-kube-api-access-22pvs\") pod \"auto-csr-approver-29535120-qfnj9\" (UID: \"3d49bbd9-833c-413f-a187-ebbb2a4bce2b\") " pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.403233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1efc605b-a275-457e-baf3-3548c0eb929e-config-volume\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.419160 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1efc605b-a275-457e-baf3-3548c0eb929e-secret-volume\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.422389 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-887zb\" (UniqueName: \"kubernetes.io/projected/1efc605b-a275-457e-baf3-3548c0eb929e-kube-api-access-887zb\") pod \"collect-profiles-29535120-8bzwb\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.485526 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.510700 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22pvs\" (UniqueName: \"kubernetes.io/projected/3d49bbd9-833c-413f-a187-ebbb2a4bce2b-kube-api-access-22pvs\") pod \"auto-csr-approver-29535120-qfnj9\" (UID: \"3d49bbd9-833c-413f-a187-ebbb2a4bce2b\") " pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.536463 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22pvs\" (UniqueName: \"kubernetes.io/projected/3d49bbd9-833c-413f-a187-ebbb2a4bce2b-kube-api-access-22pvs\") pod \"auto-csr-approver-29535120-qfnj9\" (UID: \"3d49bbd9-833c-413f-a187-ebbb2a4bce2b\") " pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:00 crc kubenswrapper[4724]: I0226 12:00:00.579895 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:01 crc kubenswrapper[4724]: I0226 12:00:01.018325 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb"] Feb 26 12:00:01 crc kubenswrapper[4724]: I0226 12:00:01.079621 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" event={"ID":"1efc605b-a275-457e-baf3-3548c0eb929e","Type":"ContainerStarted","Data":"4945105af2c64b56827043ddaa2207bf62c4a2d058a8e4d36e8cf9bfe17997e6"} Feb 26 12:00:01 crc kubenswrapper[4724]: I0226 12:00:01.156732 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-qfnj9"] Feb 26 12:00:02 crc kubenswrapper[4724]: I0226 12:00:02.090911 4724 generic.go:334] "Generic (PLEG): container finished" podID="1efc605b-a275-457e-baf3-3548c0eb929e" containerID="ecf70371a2ac911d14ef78fb9b42c1844c914cdeea5dc8ef48f408c7f9676572" exitCode=0 Feb 26 12:00:02 crc kubenswrapper[4724]: I0226 12:00:02.091095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" event={"ID":"1efc605b-a275-457e-baf3-3548c0eb929e","Type":"ContainerDied","Data":"ecf70371a2ac911d14ef78fb9b42c1844c914cdeea5dc8ef48f408c7f9676572"} Feb 26 12:00:02 crc kubenswrapper[4724]: I0226 12:00:02.094553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" event={"ID":"3d49bbd9-833c-413f-a187-ebbb2a4bce2b","Type":"ContainerStarted","Data":"379a71b512f74208a4830d1159e6455e59b8f43766f92501c1851d5d2c9c19b2"} Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.427450 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.608968 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1efc605b-a275-457e-baf3-3548c0eb929e-config-volume\") pod \"1efc605b-a275-457e-baf3-3548c0eb929e\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.609470 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1efc605b-a275-457e-baf3-3548c0eb929e-secret-volume\") pod \"1efc605b-a275-457e-baf3-3548c0eb929e\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.609615 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-887zb\" (UniqueName: \"kubernetes.io/projected/1efc605b-a275-457e-baf3-3548c0eb929e-kube-api-access-887zb\") pod \"1efc605b-a275-457e-baf3-3548c0eb929e\" (UID: \"1efc605b-a275-457e-baf3-3548c0eb929e\") " Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.609769 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1efc605b-a275-457e-baf3-3548c0eb929e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1efc605b-a275-457e-baf3-3548c0eb929e" (UID: "1efc605b-a275-457e-baf3-3548c0eb929e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.610264 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1efc605b-a275-457e-baf3-3548c0eb929e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.617866 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1efc605b-a275-457e-baf3-3548c0eb929e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1efc605b-a275-457e-baf3-3548c0eb929e" (UID: "1efc605b-a275-457e-baf3-3548c0eb929e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.619788 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efc605b-a275-457e-baf3-3548c0eb929e-kube-api-access-887zb" (OuterVolumeSpecName: "kube-api-access-887zb") pod "1efc605b-a275-457e-baf3-3548c0eb929e" (UID: "1efc605b-a275-457e-baf3-3548c0eb929e"). InnerVolumeSpecName "kube-api-access-887zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.711930 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1efc605b-a275-457e-baf3-3548c0eb929e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:03 crc kubenswrapper[4724]: I0226 12:00:03.712232 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-887zb\" (UniqueName: \"kubernetes.io/projected/1efc605b-a275-457e-baf3-3548c0eb929e-kube-api-access-887zb\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.124002 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" event={"ID":"1efc605b-a275-457e-baf3-3548c0eb929e","Type":"ContainerDied","Data":"4945105af2c64b56827043ddaa2207bf62c4a2d058a8e4d36e8cf9bfe17997e6"} Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.124057 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4945105af2c64b56827043ddaa2207bf62c4a2d058a8e4d36e8cf9bfe17997e6" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.124137 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.450626 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Feb 26 12:00:04 crc kubenswrapper[4724]: E0226 12:00:04.451411 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efc605b-a275-457e-baf3-3548c0eb929e" containerName="collect-profiles" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.451431 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efc605b-a275-457e-baf3-3548c0eb929e" containerName="collect-profiles" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.451727 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1efc605b-a275-457e-baf3-3548c0eb929e" containerName="collect-profiles" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.453479 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.460115 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.465097 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-khdhf" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.465299 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.465364 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.497777 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.534331 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.534575 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.534686 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.544591 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7"] Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.555708 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535075-rv8b7"] Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637094 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637356 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637540 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637604 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637696 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.637992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.638096 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvtqx\" (UniqueName: \"kubernetes.io/projected/14b6ff63-4a92-49d9-9d37-0f2092545b77-kube-api-access-bvtqx\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.641236 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") 
" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.641522 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.642975 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.741200 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.741535 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.741573 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.741609 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.741645 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvtqx\" (UniqueName: \"kubernetes.io/projected/14b6ff63-4a92-49d9-9d37-0f2092545b77-kube-api-access-bvtqx\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.741696 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.742114 
4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.742502 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.743371 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.746337 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.748705 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.770302 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvtqx\" (UniqueName: \"kubernetes.io/projected/14b6ff63-4a92-49d9-9d37-0f2092545b77-kube-api-access-bvtqx\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:04 crc kubenswrapper[4724]: I0226 12:00:04.794965 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:05 crc kubenswrapper[4724]: I0226 12:00:05.080533 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 12:00:05 crc kubenswrapper[4724]: I0226 12:00:05.620120 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Feb 26 12:00:05 crc kubenswrapper[4724]: W0226 12:00:05.625331 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b6ff63_4a92_49d9_9d37_0f2092545b77.slice/crio-9fc4205fd5b72c50c66826bc69e83ab35d49920066f2e5030285b0dba052ce6b WatchSource:0}: Error finding container 9fc4205fd5b72c50c66826bc69e83ab35d49920066f2e5030285b0dba052ce6b: Status 404 returned error can't find the container with id 9fc4205fd5b72c50c66826bc69e83ab35d49920066f2e5030285b0dba052ce6b Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.032968 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1205f6-96f8-47e5-bd64-8bfae8525d43" path="/var/lib/kubelet/pods/ac1205f6-96f8-47e5-bd64-8bfae8525d43/volumes" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.144079 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"14b6ff63-4a92-49d9-9d37-0f2092545b77","Type":"ContainerStarted","Data":"9fc4205fd5b72c50c66826bc69e83ab35d49920066f2e5030285b0dba052ce6b"} Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.145838 4724 generic.go:334] "Generic (PLEG): container finished" podID="3d49bbd9-833c-413f-a187-ebbb2a4bce2b" containerID="af1f479c9ae010d452db170a1f868339c37b491d3c1f0684f7cdb8a8cc0abc88" exitCode=0 Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.145866 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" event={"ID":"3d49bbd9-833c-413f-a187-ebbb2a4bce2b","Type":"ContainerDied","Data":"af1f479c9ae010d452db170a1f868339c37b491d3c1f0684f7cdb8a8cc0abc88"} Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.369751 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wm6wn"] Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.372036 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.373872 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-utilities\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.373937 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-catalog-content\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.374044 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fchb\" (UniqueName: \"kubernetes.io/projected/31ae0b63-a936-4680-8320-04b5ba6a6de4-kube-api-access-9fchb\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.427100 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm6wn"] Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.476134 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-utilities\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.476380 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-catalog-content\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.476575 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fchb\" (UniqueName: \"kubernetes.io/projected/31ae0b63-a936-4680-8320-04b5ba6a6de4-kube-api-access-9fchb\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.476842 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-catalog-content\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.476856 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-utilities\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.501261 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9fchb\" (UniqueName: \"kubernetes.io/projected/31ae0b63-a936-4680-8320-04b5ba6a6de4-kube-api-access-9fchb\") pod \"redhat-marketplace-wm6wn\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:06 crc kubenswrapper[4724]: I0226 12:00:06.706869 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:07 crc kubenswrapper[4724]: I0226 12:00:07.045462 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm6wn"] Feb 26 12:00:07 crc kubenswrapper[4724]: W0226 12:00:07.057963 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ae0b63_a936_4680_8320_04b5ba6a6de4.slice/crio-410a314169399e92a68cf6c472c2b7439f5f770f35099495aa2426aea8824ef3 WatchSource:0}: Error finding container 410a314169399e92a68cf6c472c2b7439f5f770f35099495aa2426aea8824ef3: Status 404 returned error can't find the container with id 410a314169399e92a68cf6c472c2b7439f5f770f35099495aa2426aea8824ef3 Feb 26 12:00:07 crc kubenswrapper[4724]: I0226 12:00:07.164374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerStarted","Data":"410a314169399e92a68cf6c472c2b7439f5f770f35099495aa2426aea8824ef3"} Feb 26 12:00:07 crc kubenswrapper[4724]: I0226 12:00:07.541775 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:07 crc kubenswrapper[4724]: I0226 12:00:07.715059 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22pvs\" (UniqueName: \"kubernetes.io/projected/3d49bbd9-833c-413f-a187-ebbb2a4bce2b-kube-api-access-22pvs\") pod \"3d49bbd9-833c-413f-a187-ebbb2a4bce2b\" (UID: \"3d49bbd9-833c-413f-a187-ebbb2a4bce2b\") " Feb 26 12:00:07 crc kubenswrapper[4724]: I0226 12:00:07.722400 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d49bbd9-833c-413f-a187-ebbb2a4bce2b-kube-api-access-22pvs" (OuterVolumeSpecName: "kube-api-access-22pvs") pod "3d49bbd9-833c-413f-a187-ebbb2a4bce2b" (UID: "3d49bbd9-833c-413f-a187-ebbb2a4bce2b"). InnerVolumeSpecName "kube-api-access-22pvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:00:07 crc kubenswrapper[4724]: I0226 12:00:07.820910 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22pvs\" (UniqueName: \"kubernetes.io/projected/3d49bbd9-833c-413f-a187-ebbb2a4bce2b-kube-api-access-22pvs\") on node \"crc\" DevicePath \"\"" Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.178611 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" event={"ID":"3d49bbd9-833c-413f-a187-ebbb2a4bce2b","Type":"ContainerDied","Data":"379a71b512f74208a4830d1159e6455e59b8f43766f92501c1851d5d2c9c19b2"} Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.178862 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="379a71b512f74208a4830d1159e6455e59b8f43766f92501c1851d5d2c9c19b2" Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.178930 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535120-qfnj9" Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.181976 4724 generic.go:334] "Generic (PLEG): container finished" podID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerID="11f80776746b531289369e44ff43aacf9a14125a86e89c104dcb7333f7df44b3" exitCode=0 Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.182030 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerDied","Data":"11f80776746b531289369e44ff43aacf9a14125a86e89c104dcb7333f7df44b3"} Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.612372 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-df786"] Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.620761 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535114-df786"] Feb 26 12:00:08 crc kubenswrapper[4724]: I0226 12:00:08.976088 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:00:08 crc kubenswrapper[4724]: E0226 12:00:08.976330 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:00:09 crc kubenswrapper[4724]: I0226 12:00:09.991663 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d" path="/var/lib/kubelet/pods/30d9b6c6-89a4-4fb2-a2f0-5e169c3c359d/volumes" Feb 26 12:00:10 crc kubenswrapper[4724]: I0226 12:00:10.237629 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerStarted","Data":"4f233773aa27caca3f80e4672111323ac25adc76102271a9cba0d454d396ebbc"} Feb 26 12:00:18 crc kubenswrapper[4724]: I0226 12:00:18.324802 4724 generic.go:334] "Generic (PLEG): container finished" podID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerID="4f233773aa27caca3f80e4672111323ac25adc76102271a9cba0d454d396ebbc" exitCode=0 Feb 26 12:00:18 crc kubenswrapper[4724]: I0226 12:00:18.324874 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerDied","Data":"4f233773aa27caca3f80e4672111323ac25adc76102271a9cba0d454d396ebbc"} Feb 26 12:00:19 crc kubenswrapper[4724]: I0226 12:00:19.976219 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:00:19 crc kubenswrapper[4724]: E0226 12:00:19.976775 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:00:21 crc kubenswrapper[4724]: I0226 
12:00:21.358896 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerStarted","Data":"49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4"} Feb 26 12:00:21 crc kubenswrapper[4724]: I0226 12:00:21.389999 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wm6wn" podStartSLOduration=3.027896021 podStartE2EDuration="15.38998106s" podCreationTimestamp="2026-02-26 12:00:06 +0000 UTC" firstStartedPulling="2026-02-26 12:00:08.184167448 +0000 UTC m=+3274.839906563" lastFinishedPulling="2026-02-26 12:00:20.546252487 +0000 UTC m=+3287.201991602" observedRunningTime="2026-02-26 12:00:21.37871602 +0000 UTC m=+3288.034455145" watchObservedRunningTime="2026-02-26 12:00:21.38998106 +0000 UTC m=+3288.045720185" Feb 26 12:00:26 crc kubenswrapper[4724]: I0226 12:00:26.734041 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:26 crc kubenswrapper[4724]: I0226 12:00:26.734648 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:27 crc kubenswrapper[4724]: I0226 12:00:27.794007 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wm6wn" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" probeResult="failure" output=< Feb 26 12:00:27 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:00:27 crc kubenswrapper[4724]: > Feb 26 12:00:32 crc kubenswrapper[4724]: I0226 12:00:32.979405 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:00:32 crc kubenswrapper[4724]: E0226 12:00:32.991097 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:00:37 crc kubenswrapper[4724]: I0226 12:00:37.760831 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wm6wn" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" probeResult="failure" output=< Feb 26 12:00:37 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:00:37 crc kubenswrapper[4724]: > Feb 26 12:00:46 crc kubenswrapper[4724]: I0226 12:00:46.775585 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:46 crc kubenswrapper[4724]: I0226 12:00:46.837531 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:00:46 crc kubenswrapper[4724]: I0226 12:00:46.976374 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:00:46 crc kubenswrapper[4724]: E0226 12:00:46.976839 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:00:47 crc kubenswrapper[4724]: I0226 12:00:47.072670 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm6wn"] Feb 26 12:00:48 crc kubenswrapper[4724]: I0226 12:00:48.081189 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wm6wn" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" containerID="cri-o://49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" gracePeriod=2 Feb 26 12:00:49 crc kubenswrapper[4724]: I0226 12:00:49.090910 4724 generic.go:334] "Generic (PLEG): container finished" podID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" exitCode=0 Feb 26 12:00:49 crc kubenswrapper[4724]: I0226 12:00:49.090982 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerDied","Data":"49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4"} Feb 26 12:00:57 crc kubenswrapper[4724]: E0226 12:00:57.801443 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 12:00:57 crc kubenswrapper[4724]: E0226 12:00:57.821040 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 12:00:57 crc kubenswrapper[4724]: E0226 12:00:57.823711 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 12:00:57 crc kubenswrapper[4724]: E0226 12:00:57.823778 4724 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wm6wn" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.853017 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkp2s"] Feb 26 12:00:59 crc kubenswrapper[4724]: E0226 12:00:59.854129 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d49bbd9-833c-413f-a187-ebbb2a4bce2b" containerName="oc" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.854147 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d49bbd9-833c-413f-a187-ebbb2a4bce2b" containerName="oc" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.854444 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d49bbd9-833c-413f-a187-ebbb2a4bce2b" containerName="oc" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.859350 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.873779 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkp2s"] Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.946555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-utilities\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.946716 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9w66\" (UniqueName: \"kubernetes.io/projected/26cc587e-877d-4ba1-87e8-3542e82b1935-kube-api-access-m9w66\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:00:59 crc kubenswrapper[4724]: I0226 12:00:59.946830 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-catalog-content\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.047994 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9w66\" (UniqueName: \"kubernetes.io/projected/26cc587e-877d-4ba1-87e8-3542e82b1935-kube-api-access-m9w66\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.048104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-catalog-content\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.048143 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-utilities\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.048740 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-utilities\") pod \"community-operators-wkp2s\" (UID: 
\"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.048905 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-catalog-content\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.958231 4724 scope.go:117] "RemoveContainer" containerID="052437116e29d5f41d32260321c962da74f58b2f08dc44c85ebff105046c618f" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.977364 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:01:00 crc kubenswrapper[4724]: E0226 12:01:00.977978 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:01:00 crc kubenswrapper[4724]: I0226 12:01:00.981412 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9w66\" (UniqueName: \"kubernetes.io/projected/26cc587e-877d-4ba1-87e8-3542e82b1935-kube-api-access-m9w66\") pod \"community-operators-wkp2s\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.093144 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.109415 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535121-zgmbr"] Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.110681 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.137247 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-combined-ca-bundle\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.137324 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-fernet-keys\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.137378 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g988m\" (UniqueName: \"kubernetes.io/projected/97ac65d3-f64d-4a73-b7b6-df090fc3706d-kube-api-access-g988m\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.137710 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-config-data\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.173399 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535121-zgmbr"] Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.263473 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-config-data\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.263578 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-combined-ca-bundle\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.263610 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-fernet-keys\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.263638 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g988m\" (UniqueName: \"kubernetes.io/projected/97ac65d3-f64d-4a73-b7b6-df090fc3706d-kube-api-access-g988m\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.269807 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-combined-ca-bundle\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.294254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-fernet-keys\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.294746 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-config-data\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.320927 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g988m\" (UniqueName: \"kubernetes.io/projected/97ac65d3-f64d-4a73-b7b6-df090fc3706d-kube-api-access-g988m\") pod \"keystone-cron-29535121-zgmbr\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:01 crc kubenswrapper[4724]: I0226 12:01:01.461690 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:06 crc kubenswrapper[4724]: E0226 12:01:06.708445 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 12:01:06 crc kubenswrapper[4724]: E0226 12:01:06.710928 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 12:01:06 crc kubenswrapper[4724]: E0226 12:01:06.711347 4724 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 12:01:06 crc kubenswrapper[4724]: E0226 12:01:06.711395 4724 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-wm6wn" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" Feb 26 12:01:12 crc kubenswrapper[4724]: I0226 12:01:12.976102 4724 scope.go:117] "RemoveContainer" 
containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:01:12 crc kubenswrapper[4724]: E0226 12:01:12.976666 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:01:13 crc kubenswrapper[4724]: I0226 12:01:13.197745 4724 scope.go:117] "RemoveContainer" containerID="013f716743e64afb694d637371369e363ee3d7cc6f8a69acd8966d870d902577" Feb 26 12:01:13 crc kubenswrapper[4724]: E0226 12:01:13.662203 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb" Feb 26 12:01:13 crc kubenswrapper[4724]: E0226 12:01:13.669855 4724 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb" Feb 26 12:01:13 crc kubenswrapper[4724]: E0226 12:01:13.677337 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvtqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/term
ination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(14b6ff63-4a92-49d9-9d37-0f2092545b77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 12:01:13 crc kubenswrapper[4724]: E0226 12:01:13.679612 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="14b6ff63-4a92-49d9-9d37-0f2092545b77" Feb 26 12:01:14 crc kubenswrapper[4724]: E0226 12:01:14.084782 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="14b6ff63-4a92-49d9-9d37-0f2092545b77" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.284677 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.327461 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-catalog-content\") pod \"31ae0b63-a936-4680-8320-04b5ba6a6de4\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.327584 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fchb\" (UniqueName: \"kubernetes.io/projected/31ae0b63-a936-4680-8320-04b5ba6a6de4-kube-api-access-9fchb\") pod \"31ae0b63-a936-4680-8320-04b5ba6a6de4\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.327688 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-utilities\") pod \"31ae0b63-a936-4680-8320-04b5ba6a6de4\" (UID: \"31ae0b63-a936-4680-8320-04b5ba6a6de4\") " Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.328509 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-utilities" (OuterVolumeSpecName: "utilities") pod "31ae0b63-a936-4680-8320-04b5ba6a6de4" (UID: "31ae0b63-a936-4680-8320-04b5ba6a6de4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.337146 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31ae0b63-a936-4680-8320-04b5ba6a6de4" (UID: "31ae0b63-a936-4680-8320-04b5ba6a6de4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.376461 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ae0b63-a936-4680-8320-04b5ba6a6de4-kube-api-access-9fchb" (OuterVolumeSpecName: "kube-api-access-9fchb") pod "31ae0b63-a936-4680-8320-04b5ba6a6de4" (UID: "31ae0b63-a936-4680-8320-04b5ba6a6de4"). InnerVolumeSpecName "kube-api-access-9fchb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.430000 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.430029 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fchb\" (UniqueName: \"kubernetes.io/projected/31ae0b63-a936-4680-8320-04b5ba6a6de4-kube-api-access-9fchb\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.430040 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ae0b63-a936-4680-8320-04b5ba6a6de4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.563885 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkp2s"] Feb 26 12:01:14 crc kubenswrapper[4724]: I0226 12:01:14.593963 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535121-zgmbr"] Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.090388 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm6wn" event={"ID":"31ae0b63-a936-4680-8320-04b5ba6a6de4","Type":"ContainerDied","Data":"410a314169399e92a68cf6c472c2b7439f5f770f35099495aa2426aea8824ef3"} Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.090733 4724 scope.go:117] "RemoveContainer" containerID="49856195854ed48cd8af5b82666a031f127133b59caa3a19374d394025f21de4" Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.090664 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm6wn" Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.095714 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerStarted","Data":"a91f0c0b62c06624e7cb4aabe3a415cd0242f5e69cfd4ba4ec7c0c02b5af4535"} Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.097030 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-zgmbr" event={"ID":"97ac65d3-f64d-4a73-b7b6-df090fc3706d","Type":"ContainerStarted","Data":"c40fc2553925e010289a66832dea2d7bbbcbe65699b9ae35ccf3be0233888f69"} Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.135207 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm6wn"] Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.188608 4724 scope.go:117] "RemoveContainer" containerID="4f233773aa27caca3f80e4672111323ac25adc76102271a9cba0d454d396ebbc" Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.191031 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm6wn"] Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.269360 4724 scope.go:117] "RemoveContainer" containerID="11f80776746b531289369e44ff43aacf9a14125a86e89c104dcb7333f7df44b3" Feb 26 12:01:15 crc kubenswrapper[4724]: I0226 12:01:15.986542 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" path="/var/lib/kubelet/pods/31ae0b63-a936-4680-8320-04b5ba6a6de4/volumes" Feb 26 12:01:16 crc kubenswrapper[4724]: I0226 12:01:16.113426 4724 generic.go:334] "Generic (PLEG): container finished" podID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerID="a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80" exitCode=0 Feb 26 12:01:16 crc kubenswrapper[4724]: I0226 12:01:16.113514 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerDied","Data":"a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80"} Feb 26 12:01:16 crc kubenswrapper[4724]: I0226 12:01:16.117904 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-zgmbr" event={"ID":"97ac65d3-f64d-4a73-b7b6-df090fc3706d","Type":"ContainerStarted","Data":"4add9a9a1d6e89c7acc6fd83ce47979b7300ceea40426f7789850b09a02ad5ac"} Feb 26 12:01:16 crc kubenswrapper[4724]: I0226 12:01:16.162275 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535121-zgmbr" podStartSLOduration=15.162254545 podStartE2EDuration="15.162254545s" podCreationTimestamp="2026-02-26 12:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:01:16.159911735 +0000 UTC m=+3342.815650860" watchObservedRunningTime="2026-02-26 12:01:16.162254545 +0000 UTC m=+3342.817993670" Feb 26 12:01:18 crc kubenswrapper[4724]: I0226 12:01:18.161709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerStarted","Data":"937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91"} Feb 26 12:01:19 crc kubenswrapper[4724]: I0226 12:01:19.175711 4724 generic.go:334] "Generic 
(PLEG): container finished" podID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerID="937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91" exitCode=0 Feb 26 12:01:19 crc kubenswrapper[4724]: I0226 12:01:19.175751 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerDied","Data":"937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91"} Feb 26 12:01:20 crc kubenswrapper[4724]: I0226 12:01:20.186075 4724 generic.go:334] "Generic (PLEG): container finished" podID="97ac65d3-f64d-4a73-b7b6-df090fc3706d" containerID="4add9a9a1d6e89c7acc6fd83ce47979b7300ceea40426f7789850b09a02ad5ac" exitCode=0 Feb 26 12:01:20 crc kubenswrapper[4724]: I0226 12:01:20.186165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-zgmbr" event={"ID":"97ac65d3-f64d-4a73-b7b6-df090fc3706d","Type":"ContainerDied","Data":"4add9a9a1d6e89c7acc6fd83ce47979b7300ceea40426f7789850b09a02ad5ac"} Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.199604 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerStarted","Data":"72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f"} Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.219238 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkp2s" podStartSLOduration=18.113662909 podStartE2EDuration="22.219214646s" podCreationTimestamp="2026-02-26 12:00:59 +0000 UTC" firstStartedPulling="2026-02-26 12:01:16.116015125 +0000 UTC m=+3342.771754240" lastFinishedPulling="2026-02-26 12:01:20.221566862 +0000 UTC m=+3346.877305977" observedRunningTime="2026-02-26 12:01:21.217263666 +0000 UTC m=+3347.873002821" watchObservedRunningTime="2026-02-26 12:01:21.219214646 +0000 UTC m=+3347.874953781" Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.872756 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.975917 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-combined-ca-bundle\") pod \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.975963 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-config-data\") pod \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.976043 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g988m\" (UniqueName: \"kubernetes.io/projected/97ac65d3-f64d-4a73-b7b6-df090fc3706d-kube-api-access-g988m\") pod \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " Feb 26 12:01:21 crc kubenswrapper[4724]: I0226 12:01:21.976247 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-fernet-keys\") pod \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\" (UID: \"97ac65d3-f64d-4a73-b7b6-df090fc3706d\") " Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.000358 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "97ac65d3-f64d-4a73-b7b6-df090fc3706d" (UID: "97ac65d3-f64d-4a73-b7b6-df090fc3706d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.001823 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ac65d3-f64d-4a73-b7b6-df090fc3706d-kube-api-access-g988m" (OuterVolumeSpecName: "kube-api-access-g988m") pod "97ac65d3-f64d-4a73-b7b6-df090fc3706d" (UID: "97ac65d3-f64d-4a73-b7b6-df090fc3706d"). InnerVolumeSpecName "kube-api-access-g988m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.015511 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97ac65d3-f64d-4a73-b7b6-df090fc3706d" (UID: "97ac65d3-f64d-4a73-b7b6-df090fc3706d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.038029 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-config-data" (OuterVolumeSpecName: "config-data") pod "97ac65d3-f64d-4a73-b7b6-df090fc3706d" (UID: "97ac65d3-f64d-4a73-b7b6-df090fc3706d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.080976 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.081099 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.081157 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ac65d3-f64d-4a73-b7b6-df090fc3706d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.081232 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g988m\" (UniqueName: \"kubernetes.io/projected/97ac65d3-f64d-4a73-b7b6-df090fc3706d-kube-api-access-g988m\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.214722 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535121-zgmbr" event={"ID":"97ac65d3-f64d-4a73-b7b6-df090fc3706d","Type":"ContainerDied","Data":"c40fc2553925e010289a66832dea2d7bbbcbe65699b9ae35ccf3be0233888f69"} Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.214796 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40fc2553925e010289a66832dea2d7bbbcbe65699b9ae35ccf3be0233888f69" Feb 26 12:01:22 crc kubenswrapper[4724]: I0226 12:01:22.214754 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535121-zgmbr" Feb 26 12:01:26 crc kubenswrapper[4724]: I0226 12:01:26.497833 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:01:26 crc kubenswrapper[4724]: E0226 12:01:26.498695 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:01:26 crc kubenswrapper[4724]: I0226 12:01:26.696332 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 12:01:28 crc kubenswrapper[4724]: I0226 12:01:28.534750 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"14b6ff63-4a92-49d9-9d37-0f2092545b77","Type":"ContainerStarted","Data":"aa44cb08d45e4b9bf327a86f7953cb12f197c9ca36499dcdd62d9d5ef4c89ca1"} Feb 26 12:01:28 crc kubenswrapper[4724]: I0226 12:01:28.563390 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.498281237 podStartE2EDuration="1m25.563366318s" podCreationTimestamp="2026-02-26 12:00:03 +0000 UTC" firstStartedPulling="2026-02-26 12:00:05.627764569 +0000 UTC m=+3272.283503684" lastFinishedPulling="2026-02-26 12:01:26.69284965 +0000 UTC m=+3353.348588765" observedRunningTime="2026-02-26 
12:01:28.555061455 +0000 UTC m=+3355.210800570" watchObservedRunningTime="2026-02-26 12:01:28.563366318 +0000 UTC m=+3355.219105433" Feb 26 12:01:31 crc kubenswrapper[4724]: I0226 12:01:31.094091 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:31 crc kubenswrapper[4724]: I0226 12:01:31.094582 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:31 crc kubenswrapper[4724]: I0226 12:01:31.140629 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:31 crc kubenswrapper[4724]: I0226 12:01:31.604330 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:31 crc kubenswrapper[4724]: I0226 12:01:31.662524 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkp2s"] Feb 26 12:01:33 crc kubenswrapper[4724]: I0226 12:01:33.576397 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkp2s" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="registry-server" containerID="cri-o://72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f" gracePeriod=2 Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.188418 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.302763 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-catalog-content\") pod \"26cc587e-877d-4ba1-87e8-3542e82b1935\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.302852 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-utilities\") pod \"26cc587e-877d-4ba1-87e8-3542e82b1935\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.302898 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9w66\" (UniqueName: \"kubernetes.io/projected/26cc587e-877d-4ba1-87e8-3542e82b1935-kube-api-access-m9w66\") pod \"26cc587e-877d-4ba1-87e8-3542e82b1935\" (UID: \"26cc587e-877d-4ba1-87e8-3542e82b1935\") " Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.304506 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-utilities" (OuterVolumeSpecName: "utilities") pod "26cc587e-877d-4ba1-87e8-3542e82b1935" (UID: "26cc587e-877d-4ba1-87e8-3542e82b1935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.308536 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cc587e-877d-4ba1-87e8-3542e82b1935-kube-api-access-m9w66" (OuterVolumeSpecName: "kube-api-access-m9w66") pod "26cc587e-877d-4ba1-87e8-3542e82b1935" (UID: "26cc587e-877d-4ba1-87e8-3542e82b1935"). InnerVolumeSpecName "kube-api-access-m9w66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.354507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26cc587e-877d-4ba1-87e8-3542e82b1935" (UID: "26cc587e-877d-4ba1-87e8-3542e82b1935"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.405806 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.405871 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26cc587e-877d-4ba1-87e8-3542e82b1935-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.405891 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9w66\" (UniqueName: \"kubernetes.io/projected/26cc587e-877d-4ba1-87e8-3542e82b1935-kube-api-access-m9w66\") on node \"crc\" DevicePath \"\"" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.591754 4724 generic.go:334] "Generic (PLEG): container finished" podID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerID="72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f" exitCode=0 Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.591806 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkp2s" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.591835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerDied","Data":"72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f"} Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.591895 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkp2s" event={"ID":"26cc587e-877d-4ba1-87e8-3542e82b1935","Type":"ContainerDied","Data":"a91f0c0b62c06624e7cb4aabe3a415cd0242f5e69cfd4ba4ec7c0c02b5af4535"} Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.591931 4724 scope.go:117] "RemoveContainer" containerID="72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.641451 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkp2s"] Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.644645 4724 scope.go:117] "RemoveContainer" containerID="937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.653762 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkp2s"] Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.672826 4724 scope.go:117] "RemoveContainer" containerID="a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.710717 4724 scope.go:117] "RemoveContainer" containerID="72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f" Feb 26 12:01:34 crc kubenswrapper[4724]: E0226 12:01:34.711362 4724 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f\": container with ID starting with 72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f not found: ID does not exist" containerID="72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.711420 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f"} err="failed to get container status \"72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f\": rpc error: code = NotFound desc = could not find container \"72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f\": container with ID starting with 72b6e542eb8657d6c055f27892161065199857d23cf23ac8bf18e14c9ad8a94f not found: ID does not exist" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.711454 4724 scope.go:117] "RemoveContainer" containerID="937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91" Feb 26 12:01:34 crc kubenswrapper[4724]: E0226 12:01:34.711851 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91\": container with ID starting with 937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91 not found: ID does not exist" containerID="937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.711888 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91"} err="failed to get container status \"937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91\": rpc error: code = NotFound desc = could not find container \"937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91\": container with ID starting with 937ef412e0c86d01456e20055d24bae7d92c6ba7c616bd57880c161b3bc03e91 not found: ID does not exist" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.711913 4724 scope.go:117] "RemoveContainer" containerID="a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80" Feb 26 12:01:34 crc kubenswrapper[4724]: E0226 12:01:34.712243 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80\": container with ID starting with a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80 not found: ID does not exist" containerID="a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80" Feb 26 12:01:34 crc kubenswrapper[4724]: I0226 12:01:34.712290 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80"} err="failed to get container status \"a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80\": rpc error: code = NotFound desc = could not find container \"a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80\": container with ID starting with a13dcee65fc4c4d6e57c428bdfe20a238462e93e6ff632de32743383c73dee80 not found: ID does not exist" Feb 26 12:01:35 crc kubenswrapper[4724]: I0226 12:01:35.987506 4724 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" path="/var/lib/kubelet/pods/26cc587e-877d-4ba1-87e8-3542e82b1935/volumes" Feb 26 12:01:39 crc kubenswrapper[4724]: I0226 12:01:39.975735 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:01:39 crc kubenswrapper[4724]: E0226 12:01:39.976426 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:01:52 crc kubenswrapper[4724]: I0226 12:01:52.976922 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:01:52 crc kubenswrapper[4724]: E0226 12:01:52.977659 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.768955 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gl5kr"] Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769442 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="extract-content" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769470 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="extract-content" Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769489 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769498 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769524 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ac65d3-f64d-4a73-b7b6-df090fc3706d" containerName="keystone-cron" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769534 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ac65d3-f64d-4a73-b7b6-df090fc3706d" containerName="keystone-cron" Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769556 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="extract-utilities" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769563 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="extract-utilities" Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769576 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="extract-content" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769585 4724 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="extract-content" Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769599 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="registry-server" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769607 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="registry-server" Feb 26 12:01:53 crc kubenswrapper[4724]: E0226 12:01:53.769638 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="extract-utilities" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769647 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="extract-utilities" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769868 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cc587e-877d-4ba1-87e8-3542e82b1935" containerName="registry-server" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769888 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ae0b63-a936-4680-8320-04b5ba6a6de4" containerName="registry-server" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.769905 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ac65d3-f64d-4a73-b7b6-df090fc3706d" containerName="keystone-cron" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.772163 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.787465 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl5kr"] Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.866702 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc47p\" (UniqueName: \"kubernetes.io/projected/a9e6e325-2d95-46d3-822a-a21aa94cfb04-kube-api-access-nc47p\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.866812 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-utilities\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.866835 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-catalog-content\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.968425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc47p\" (UniqueName: \"kubernetes.io/projected/a9e6e325-2d95-46d3-822a-a21aa94cfb04-kube-api-access-nc47p\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.968512 
4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-utilities\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.968571 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-catalog-content\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.969038 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-catalog-content\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.969298 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-utilities\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:53 crc kubenswrapper[4724]: I0226 12:01:53.991019 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc47p\" (UniqueName: \"kubernetes.io/projected/a9e6e325-2d95-46d3-822a-a21aa94cfb04-kube-api-access-nc47p\") pod \"redhat-operators-gl5kr\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:54 crc kubenswrapper[4724]: I0226 12:01:54.105400 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:01:54 crc kubenswrapper[4724]: I0226 12:01:54.619702 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl5kr"] Feb 26 12:01:54 crc kubenswrapper[4724]: I0226 12:01:54.842915 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerStarted","Data":"2ecb4ba9edf4c1cf57dbd3720d67afbd474adfdd1dff57401976ea6e8cc0f3f8"} Feb 26 12:01:55 crc kubenswrapper[4724]: I0226 12:01:55.854011 4724 generic.go:334] "Generic (PLEG): container finished" podID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerID="6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd" exitCode=0 Feb 26 12:01:55 crc kubenswrapper[4724]: I0226 12:01:55.854103 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerDied","Data":"6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd"} Feb 26 12:01:59 crc kubenswrapper[4724]: I0226 12:01:59.900952 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerStarted","Data":"c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b"} Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.165300 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535122-zpprr"] Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.166947 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.172950 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.172982 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.173043 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.178855 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-zpprr"] Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.194743 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/5ea65c59-e7cb-443d-8450-65fc9d963caf-kube-api-access-wnxlq\") pod \"auto-csr-approver-29535122-zpprr\" (UID: \"5ea65c59-e7cb-443d-8450-65fc9d963caf\") " pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.297756 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/5ea65c59-e7cb-443d-8450-65fc9d963caf-kube-api-access-wnxlq\") pod \"auto-csr-approver-29535122-zpprr\" (UID: \"5ea65c59-e7cb-443d-8450-65fc9d963caf\") " pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.322808 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxlq\" 
(UniqueName: \"kubernetes.io/projected/5ea65c59-e7cb-443d-8450-65fc9d963caf-kube-api-access-wnxlq\") pod \"auto-csr-approver-29535122-zpprr\" (UID: \"5ea65c59-e7cb-443d-8450-65fc9d963caf\") " pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:00 crc kubenswrapper[4724]: I0226 12:02:00.499404 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:01 crc kubenswrapper[4724]: I0226 12:02:01.057352 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-zpprr"] Feb 26 12:02:01 crc kubenswrapper[4724]: I0226 12:02:01.918010 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-zpprr" event={"ID":"5ea65c59-e7cb-443d-8450-65fc9d963caf","Type":"ContainerStarted","Data":"7ffa938e8a711ec57a7277350172e546aa0dd985a8dd7e4c48a5c383fbf27ffd"} Feb 26 12:02:04 crc kubenswrapper[4724]: I0226 12:02:04.984321 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:02:04 crc kubenswrapper[4724]: E0226 12:02:04.985247 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:02:08 crc kubenswrapper[4724]: I0226 12:02:08.980443 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-zpprr" event={"ID":"5ea65c59-e7cb-443d-8450-65fc9d963caf","Type":"ContainerStarted","Data":"ff18a4aaa12faf39d325f50bfab8dc39a758e4a2f88cf6410bdfb38ac733e7ec"} Feb 26 12:02:09 crc kubenswrapper[4724]: I0226 12:02:09.000763 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535122-zpprr" podStartSLOduration=2.447390023 podStartE2EDuration="9.000731093s" podCreationTimestamp="2026-02-26 12:02:00 +0000 UTC" firstStartedPulling="2026-02-26 12:02:01.061303663 +0000 UTC m=+3387.717042778" lastFinishedPulling="2026-02-26 12:02:07.614644733 +0000 UTC m=+3394.270383848" observedRunningTime="2026-02-26 12:02:08.994350479 +0000 UTC m=+3395.650089604" watchObservedRunningTime="2026-02-26 12:02:09.000731093 +0000 UTC m=+3395.656470218" Feb 26 12:02:16 crc kubenswrapper[4724]: I0226 12:02:16.062593 4724 generic.go:334] "Generic (PLEG): container finished" podID="5ea65c59-e7cb-443d-8450-65fc9d963caf" containerID="ff18a4aaa12faf39d325f50bfab8dc39a758e4a2f88cf6410bdfb38ac733e7ec" exitCode=0 Feb 26 12:02:16 crc kubenswrapper[4724]: I0226 12:02:16.062678 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-zpprr" event={"ID":"5ea65c59-e7cb-443d-8450-65fc9d963caf","Type":"ContainerDied","Data":"ff18a4aaa12faf39d325f50bfab8dc39a758e4a2f88cf6410bdfb38ac733e7ec"} Feb 26 12:02:17 crc kubenswrapper[4724]: I0226 12:02:17.565226 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:17 crc kubenswrapper[4724]: I0226 12:02:17.589124 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/5ea65c59-e7cb-443d-8450-65fc9d963caf-kube-api-access-wnxlq\") pod \"5ea65c59-e7cb-443d-8450-65fc9d963caf\" (UID: \"5ea65c59-e7cb-443d-8450-65fc9d963caf\") " Feb 26 12:02:17 crc kubenswrapper[4724]: I0226 12:02:17.611879 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea65c59-e7cb-443d-8450-65fc9d963caf-kube-api-access-wnxlq" (OuterVolumeSpecName: "kube-api-access-wnxlq") pod "5ea65c59-e7cb-443d-8450-65fc9d963caf" (UID: "5ea65c59-e7cb-443d-8450-65fc9d963caf"). InnerVolumeSpecName "kube-api-access-wnxlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:02:17 crc kubenswrapper[4724]: I0226 12:02:17.692527 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/5ea65c59-e7cb-443d-8450-65fc9d963caf-kube-api-access-wnxlq\") on node \"crc\" DevicePath \"\"" Feb 26 12:02:18 crc kubenswrapper[4724]: I0226 12:02:18.082476 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535122-zpprr" event={"ID":"5ea65c59-e7cb-443d-8450-65fc9d963caf","Type":"ContainerDied","Data":"7ffa938e8a711ec57a7277350172e546aa0dd985a8dd7e4c48a5c383fbf27ffd"} Feb 26 12:02:18 crc kubenswrapper[4724]: I0226 12:02:18.082516 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ffa938e8a711ec57a7277350172e546aa0dd985a8dd7e4c48a5c383fbf27ffd" Feb 26 12:02:18 crc kubenswrapper[4724]: I0226 12:02:18.082567 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535122-zpprr" Feb 26 12:02:18 crc kubenswrapper[4724]: I0226 12:02:18.146318 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-k8lc7"] Feb 26 12:02:18 crc kubenswrapper[4724]: I0226 12:02:18.158838 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535116-k8lc7"] Feb 26 12:02:19 crc kubenswrapper[4724]: I0226 12:02:19.976143 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:02:19 crc kubenswrapper[4724]: E0226 12:02:19.976723 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:02:19 crc kubenswrapper[4724]: I0226 12:02:19.987775 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d4f9c1-8152-4d31-98db-2a1bb1b731ec" path="/var/lib/kubelet/pods/52d4f9c1-8152-4d31-98db-2a1bb1b731ec/volumes" Feb 26 12:02:20 crc kubenswrapper[4724]: I0226 12:02:20.103490 4724 generic.go:334] "Generic (PLEG): container finished" podID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerID="c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b" exitCode=0 Feb 26 12:02:20 crc kubenswrapper[4724]: I0226 12:02:20.103585 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerDied","Data":"c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b"} Feb 26 12:02:22 crc kubenswrapper[4724]: I0226 12:02:22.126567 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerStarted","Data":"192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53"} Feb 26 12:02:22 crc kubenswrapper[4724]: I0226 12:02:22.164377 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gl5kr" podStartSLOduration=4.106722332 podStartE2EDuration="29.164361511s" podCreationTimestamp="2026-02-26 12:01:53 +0000 UTC" firstStartedPulling="2026-02-26 12:01:55.856302922 +0000 UTC m=+3382.512042037" lastFinishedPulling="2026-02-26 12:02:20.913942101 +0000 UTC m=+3407.569681216" observedRunningTime="2026-02-26 12:02:22.162129263 +0000 UTC m=+3408.817868398" watchObservedRunningTime="2026-02-26 12:02:22.164361511 +0000 UTC m=+3408.820100626" Feb 26 12:02:24 crc kubenswrapper[4724]: I0226 12:02:24.135754 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:02:24 crc kubenswrapper[4724]: I0226 12:02:24.137419 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:02:25 crc kubenswrapper[4724]: I0226 12:02:25.183096 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:02:25 crc 
kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:02:25 crc kubenswrapper[4724]: > Feb 26 12:02:34 crc kubenswrapper[4724]: I0226 12:02:34.976004 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:02:34 crc kubenswrapper[4724]: E0226 12:02:34.976743 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:02:35 crc kubenswrapper[4724]: I0226 12:02:35.154610 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:02:35 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:02:35 crc kubenswrapper[4724]: > Feb 26 12:02:45 crc kubenswrapper[4724]: I0226 12:02:45.162120 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:02:45 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:02:45 crc kubenswrapper[4724]: > Feb 26 12:02:48 crc kubenswrapper[4724]: I0226 12:02:48.975879 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:02:48 crc kubenswrapper[4724]: E0226 12:02:48.976753 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:02:55 crc kubenswrapper[4724]: I0226 12:02:55.158511 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:02:55 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:02:55 crc kubenswrapper[4724]: > Feb 26 12:03:03 crc kubenswrapper[4724]: I0226 12:03:03.982927 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:03:03 crc kubenswrapper[4724]: E0226 12:03:03.983748 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:03:05 crc kubenswrapper[4724]: I0226 12:03:05.151056 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" 
podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:03:05 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:03:05 crc kubenswrapper[4724]: > Feb 26 12:03:13 crc kubenswrapper[4724]: I0226 12:03:13.743086 4724 scope.go:117] "RemoveContainer" containerID="0d965d8bad80b95a7c22e0743071ae7a6c0090f4fcb884ec603549f4611c4246" Feb 26 12:03:15 crc kubenswrapper[4724]: I0226 12:03:15.157547 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:03:15 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:03:15 crc kubenswrapper[4724]: > Feb 26 12:03:16 crc kubenswrapper[4724]: I0226 12:03:16.975482 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:03:16 crc kubenswrapper[4724]: E0226 12:03:16.976017 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:03:25 crc kubenswrapper[4724]: I0226 12:03:25.154527 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" probeResult="failure" output=< Feb 26 12:03:25 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:03:25 crc kubenswrapper[4724]: > Feb 26 12:03:30 crc kubenswrapper[4724]: I0226 12:03:30.977509 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:03:30 crc kubenswrapper[4724]: E0226 12:03:30.978582 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:03:34 crc kubenswrapper[4724]: I0226 12:03:34.183854 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:03:34 crc kubenswrapper[4724]: I0226 12:03:34.255315 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:03:34 crc kubenswrapper[4724]: I0226 12:03:34.423902 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl5kr"] Feb 26 12:03:35 crc kubenswrapper[4724]: I0226 12:03:35.861628 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gl5kr" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server" containerID="cri-o://192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53" gracePeriod=2 Feb 26 12:03:36 crc 
kubenswrapper[4724]: I0226 12:03:36.438948 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.518008 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc47p\" (UniqueName: \"kubernetes.io/projected/a9e6e325-2d95-46d3-822a-a21aa94cfb04-kube-api-access-nc47p\") pod \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.518111 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-catalog-content\") pod \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.518240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-utilities\") pod \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\" (UID: \"a9e6e325-2d95-46d3-822a-a21aa94cfb04\") " Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.519728 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-utilities" (OuterVolumeSpecName: "utilities") pod "a9e6e325-2d95-46d3-822a-a21aa94cfb04" (UID: "a9e6e325-2d95-46d3-822a-a21aa94cfb04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.568923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e6e325-2d95-46d3-822a-a21aa94cfb04-kube-api-access-nc47p" (OuterVolumeSpecName: "kube-api-access-nc47p") pod "a9e6e325-2d95-46d3-822a-a21aa94cfb04" (UID: "a9e6e325-2d95-46d3-822a-a21aa94cfb04"). InnerVolumeSpecName "kube-api-access-nc47p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.625458 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc47p\" (UniqueName: \"kubernetes.io/projected/a9e6e325-2d95-46d3-822a-a21aa94cfb04-kube-api-access-nc47p\") on node \"crc\" DevicePath \"\"" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.625498 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.700494 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9e6e325-2d95-46d3-822a-a21aa94cfb04" (UID: "a9e6e325-2d95-46d3-822a-a21aa94cfb04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.727742 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e6e325-2d95-46d3-822a-a21aa94cfb04-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.875559 4724 generic.go:334] "Generic (PLEG): container finished" podID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerID="192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53" exitCode=0 Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.875870 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerDied","Data":"192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53"} Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.875955 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl5kr" event={"ID":"a9e6e325-2d95-46d3-822a-a21aa94cfb04","Type":"ContainerDied","Data":"2ecb4ba9edf4c1cf57dbd3720d67afbd474adfdd1dff57401976ea6e8cc0f3f8"} Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.875974 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl5kr" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.875984 4724 scope.go:117] "RemoveContainer" containerID="192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.925058 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl5kr"] Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.937339 4724 scope.go:117] "RemoveContainer" containerID="c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b" Feb 26 12:03:36 crc kubenswrapper[4724]: I0226 12:03:36.940622 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gl5kr"] Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.023918 4724 scope.go:117] "RemoveContainer" containerID="6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd" Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.051409 4724 scope.go:117] "RemoveContainer" containerID="192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53" Feb 26 12:03:37 crc kubenswrapper[4724]: E0226 12:03:37.052679 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53\": container with ID starting with 192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53 not found: ID does not exist" containerID="192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53" Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.052764 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53"} err="failed to get container status \"192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53\": rpc error: code = NotFound desc = could not find container \"192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53\": container with ID starting with 192d4dc2ca0f664f65c0d4ce10fae1b5454edfcfac91ee8b0fe4fcc97b4e2c53 not found: ID does not exist" Feb 26 12:03:37 crc 
kubenswrapper[4724]: I0226 12:03:37.052836 4724 scope.go:117] "RemoveContainer" containerID="c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b" Feb 26 12:03:37 crc kubenswrapper[4724]: E0226 12:03:37.053276 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b\": container with ID starting with c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b not found: ID does not exist" containerID="c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b" Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.053337 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b"} err="failed to get container status \"c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b\": rpc error: code = NotFound desc = could not find container \"c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b\": container with ID starting with c42c40cb86ca0b2ee92ad09d5be5a292dbb1a91f044bf62177ecfd4de0c8fc9b not found: ID does not exist" Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.053355 4724 scope.go:117] "RemoveContainer" containerID="6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd" Feb 26 12:03:37 crc kubenswrapper[4724]: E0226 12:03:37.053734 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd\": container with ID starting with 6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd not found: ID does not exist" containerID="6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd" Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.053770 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd"} err="failed to get container status \"6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd\": rpc error: code = NotFound desc = could not find container \"6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd\": container with ID starting with 6a59cbe2d3179e15b9a5620c2e35beb3fd6332205088c482f924c7334e948ffd not found: ID does not exist" Feb 26 12:03:37 crc kubenswrapper[4724]: I0226 12:03:37.988007 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" path="/var/lib/kubelet/pods/a9e6e325-2d95-46d3-822a-a21aa94cfb04/volumes" Feb 26 12:03:43 crc kubenswrapper[4724]: I0226 12:03:43.989411 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" Feb 26 12:03:43 crc kubenswrapper[4724]: E0226 12:03:43.990234 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:03:57 crc kubenswrapper[4724]: I0226 12:03:57.978046 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5" 
Feb 26 12:03:57 crc kubenswrapper[4724]: E0226 12:03:57.978858 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.195340 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535124-wgsrt"]
Feb 26 12:04:00 crc kubenswrapper[4724]: E0226 12:04:00.199248 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea65c59-e7cb-443d-8450-65fc9d963caf" containerName="oc"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.199317 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea65c59-e7cb-443d-8450-65fc9d963caf" containerName="oc"
Feb 26 12:04:00 crc kubenswrapper[4724]: E0226 12:04:00.199360 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="extract-content"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.199370 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="extract-content"
Feb 26 12:04:00 crc kubenswrapper[4724]: E0226 12:04:00.199399 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="extract-utilities"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.199407 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="extract-utilities"
Feb 26 12:04:00 crc kubenswrapper[4724]: E0226 12:04:00.199425 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.199433 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.200744 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea65c59-e7cb-443d-8450-65fc9d963caf" containerName="oc"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.200792 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e6e325-2d95-46d3-822a-a21aa94cfb04" containerName="registry-server"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.202242 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.206790 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.207088 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.209986 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.283511 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-wgsrt"]
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.332950 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9bnw\" (UniqueName: \"kubernetes.io/projected/17982b22-2b96-4fee-8902-c3e25989021b-kube-api-access-m9bnw\") pod \"auto-csr-approver-29535124-wgsrt\" (UID: \"17982b22-2b96-4fee-8902-c3e25989021b\") " pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.435716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9bnw\" (UniqueName: \"kubernetes.io/projected/17982b22-2b96-4fee-8902-c3e25989021b-kube-api-access-m9bnw\") pod \"auto-csr-approver-29535124-wgsrt\" (UID: \"17982b22-2b96-4fee-8902-c3e25989021b\") " pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.473226 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9bnw\" (UniqueName: \"kubernetes.io/projected/17982b22-2b96-4fee-8902-c3e25989021b-kube-api-access-m9bnw\") pod \"auto-csr-approver-29535124-wgsrt\" (UID: \"17982b22-2b96-4fee-8902-c3e25989021b\") " pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:00 crc kubenswrapper[4724]: I0226 12:04:00.539740 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:02 crc kubenswrapper[4724]: I0226 12:04:02.613444 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-wgsrt"]
Feb 26 12:04:02 crc kubenswrapper[4724]: I0226 12:04:02.623901 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 12:04:03 crc kubenswrapper[4724]: I0226 12:04:03.640282 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-wgsrt" event={"ID":"17982b22-2b96-4fee-8902-c3e25989021b","Type":"ContainerStarted","Data":"c4aeb4a047f27e8fef050db6b9ef96b83d0f3130b5f98627f48443053b5ccad0"}
Feb 26 12:04:08 crc kubenswrapper[4724]: I0226 12:04:08.690325 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-wgsrt" event={"ID":"17982b22-2b96-4fee-8902-c3e25989021b","Type":"ContainerStarted","Data":"2977cccbc649299cba59ee3978435667753095004b3cdf83ff15ed846198b37a"}
Feb 26 12:04:08 crc kubenswrapper[4724]: I0226 12:04:08.710800 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535124-wgsrt" podStartSLOduration=5.76687358 podStartE2EDuration="8.710779972s" podCreationTimestamp="2026-02-26 12:04:00 +0000 UTC" firstStartedPulling="2026-02-26 12:04:02.621408373 +0000 UTC m=+3509.277147488" lastFinishedPulling="2026-02-26 12:04:05.565314765 +0000 UTC m=+3512.221053880" observedRunningTime="2026-02-26 12:04:08.706196175 +0000 UTC m=+3515.361935290" watchObservedRunningTime="2026-02-26 12:04:08.710779972 +0000 UTC m=+3515.366519087"
Feb 26 12:04:10 crc kubenswrapper[4724]: I0226 12:04:10.709709 4724 generic.go:334] "Generic (PLEG): container finished" podID="17982b22-2b96-4fee-8902-c3e25989021b" containerID="2977cccbc649299cba59ee3978435667753095004b3cdf83ff15ed846198b37a" exitCode=0
Feb 26 12:04:10 crc kubenswrapper[4724]: I0226 12:04:10.709793 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-wgsrt" event={"ID":"17982b22-2b96-4fee-8902-c3e25989021b","Type":"ContainerDied","Data":"2977cccbc649299cba59ee3978435667753095004b3cdf83ff15ed846198b37a"}
Feb 26 12:04:10 crc kubenswrapper[4724]: I0226 12:04:10.975811 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"
Feb 26 12:04:10 crc kubenswrapper[4724]: E0226 12:04:10.976082 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.375459 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.488018 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9bnw\" (UniqueName: \"kubernetes.io/projected/17982b22-2b96-4fee-8902-c3e25989021b-kube-api-access-m9bnw\") pod \"17982b22-2b96-4fee-8902-c3e25989021b\" (UID: \"17982b22-2b96-4fee-8902-c3e25989021b\") "
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.496538 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17982b22-2b96-4fee-8902-c3e25989021b-kube-api-access-m9bnw" (OuterVolumeSpecName: "kube-api-access-m9bnw") pod "17982b22-2b96-4fee-8902-c3e25989021b" (UID: "17982b22-2b96-4fee-8902-c3e25989021b"). InnerVolumeSpecName "kube-api-access-m9bnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.591447 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9bnw\" (UniqueName: \"kubernetes.io/projected/17982b22-2b96-4fee-8902-c3e25989021b-kube-api-access-m9bnw\") on node \"crc\" DevicePath \"\""
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.741562 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535124-wgsrt" event={"ID":"17982b22-2b96-4fee-8902-c3e25989021b","Type":"ContainerDied","Data":"c4aeb4a047f27e8fef050db6b9ef96b83d0f3130b5f98627f48443053b5ccad0"}
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.741864 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4aeb4a047f27e8fef050db6b9ef96b83d0f3130b5f98627f48443053b5ccad0"
Feb 26 12:04:12 crc kubenswrapper[4724]: I0226 12:04:12.741614 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535124-wgsrt"
Feb 26 12:04:13 crc kubenswrapper[4724]: I0226 12:04:13.522523 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-5d7rd"]
Feb 26 12:04:13 crc kubenswrapper[4724]: I0226 12:04:13.562926 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535118-5d7rd"]
Feb 26 12:04:13 crc kubenswrapper[4724]: I0226 12:04:13.996230 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98933753-e9bb-495a-a8fa-b8dc924c173b" path="/var/lib/kubelet/pods/98933753-e9bb-495a-a8fa-b8dc924c173b/volumes"
Feb 26 12:04:22 crc kubenswrapper[4724]: I0226 12:04:22.975706 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"
Feb 26 12:04:23 crc kubenswrapper[4724]: I0226 12:04:23.942710 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"4df5c2ef2a1caf17aafa325fb1254464a251fba6b0d8441497b238e575c08bc9"}
Feb 26 12:05:14 crc kubenswrapper[4724]: I0226 12:05:14.038164 4724 scope.go:117] "RemoveContainer" containerID="4293b07d4c4239e06f4312ed62720336077aff8dd13d37f116816500faf8bcd5"
Feb 26 12:05:39 crc kubenswrapper[4724]: E0226 12:05:39.842334 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:51320->38.102.83.145:45037: write tcp 38.102.83.145:51320->38.102.83.145:45037: write: broken pipe
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.159363 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535126-qvn9j"]
Feb 26 12:06:00 crc kubenswrapper[4724]: E0226 12:06:00.160339 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17982b22-2b96-4fee-8902-c3e25989021b" containerName="oc"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.160356 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="17982b22-2b96-4fee-8902-c3e25989021b" containerName="oc"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.161289 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="17982b22-2b96-4fee-8902-c3e25989021b" containerName="oc"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.162348 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.164860 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.164954 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.167099 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.212664 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqpk\" (UniqueName: \"kubernetes.io/projected/5633c143-692a-4e6a-993c-ed35be3b9c1a-kube-api-access-9nqpk\") pod \"auto-csr-approver-29535126-qvn9j\" (UID: \"5633c143-692a-4e6a-993c-ed35be3b9c1a\") " pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.244667 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-qvn9j"]
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.315164 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqpk\" (UniqueName: \"kubernetes.io/projected/5633c143-692a-4e6a-993c-ed35be3b9c1a-kube-api-access-9nqpk\") pod \"auto-csr-approver-29535126-qvn9j\" (UID: \"5633c143-692a-4e6a-993c-ed35be3b9c1a\") " pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.351131 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqpk\" (UniqueName: \"kubernetes.io/projected/5633c143-692a-4e6a-993c-ed35be3b9c1a-kube-api-access-9nqpk\") pod \"auto-csr-approver-29535126-qvn9j\" (UID: \"5633c143-692a-4e6a-993c-ed35be3b9c1a\") " pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:00 crc kubenswrapper[4724]: I0226 12:06:00.485530 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:02 crc kubenswrapper[4724]: I0226 12:06:02.546259 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-qvn9j"]
Feb 26 12:06:02 crc kubenswrapper[4724]: W0226 12:06:02.573962 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5633c143_692a_4e6a_993c_ed35be3b9c1a.slice/crio-766fd2b4f088d352fbe034ab4a97d903d5750db0e287aceb653e02b6621955d4 WatchSource:0}: Error finding container 766fd2b4f088d352fbe034ab4a97d903d5750db0e287aceb653e02b6621955d4: Status 404 returned error can't find the container with id 766fd2b4f088d352fbe034ab4a97d903d5750db0e287aceb653e02b6621955d4
Feb 26 12:06:02 crc kubenswrapper[4724]: I0226 12:06:02.649656 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-qvn9j" event={"ID":"5633c143-692a-4e6a-993c-ed35be3b9c1a","Type":"ContainerStarted","Data":"766fd2b4f088d352fbe034ab4a97d903d5750db0e287aceb653e02b6621955d4"}
Feb 26 12:06:09 crc kubenswrapper[4724]: I0226 12:06:09.706565 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-qvn9j" event={"ID":"5633c143-692a-4e6a-993c-ed35be3b9c1a","Type":"ContainerStarted","Data":"b8217b27a0985f502cebfd4f527819aacb555806f90f3af090f00dabc15d7bd0"}
Feb 26 12:06:09 crc kubenswrapper[4724]: I0226 12:06:09.729372 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535126-qvn9j" podStartSLOduration=6.55638772 podStartE2EDuration="9.729355982s" podCreationTimestamp="2026-02-26 12:06:00 +0000 UTC" firstStartedPulling="2026-02-26 12:06:02.579292641 +0000 UTC m=+3629.235031746" lastFinishedPulling="2026-02-26 12:06:05.752260893 +0000 UTC m=+3632.408000008" observedRunningTime="2026-02-26 12:06:09.719746866 +0000 UTC m=+3636.375485971" watchObservedRunningTime="2026-02-26 12:06:09.729355982 +0000 UTC m=+3636.385095097"
Feb 26 12:06:14 crc kubenswrapper[4724]: I0226 12:06:14.755355 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-qvn9j" event={"ID":"5633c143-692a-4e6a-993c-ed35be3b9c1a","Type":"ContainerDied","Data":"b8217b27a0985f502cebfd4f527819aacb555806f90f3af090f00dabc15d7bd0"}
Feb 26 12:06:14 crc kubenswrapper[4724]: I0226 12:06:14.809436 4724 generic.go:334] "Generic (PLEG): container finished" podID="5633c143-692a-4e6a-993c-ed35be3b9c1a" containerID="b8217b27a0985f502cebfd4f527819aacb555806f90f3af090f00dabc15d7bd0" exitCode=0
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.487088 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.603152 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nqpk\" (UniqueName: \"kubernetes.io/projected/5633c143-692a-4e6a-993c-ed35be3b9c1a-kube-api-access-9nqpk\") pod \"5633c143-692a-4e6a-993c-ed35be3b9c1a\" (UID: \"5633c143-692a-4e6a-993c-ed35be3b9c1a\") "
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.631161 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5633c143-692a-4e6a-993c-ed35be3b9c1a-kube-api-access-9nqpk" (OuterVolumeSpecName: "kube-api-access-9nqpk") pod "5633c143-692a-4e6a-993c-ed35be3b9c1a" (UID: "5633c143-692a-4e6a-993c-ed35be3b9c1a"). InnerVolumeSpecName "kube-api-access-9nqpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.705995 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nqpk\" (UniqueName: \"kubernetes.io/projected/5633c143-692a-4e6a-993c-ed35be3b9c1a-kube-api-access-9nqpk\") on node \"crc\" DevicePath \"\""
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.862853 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535126-qvn9j" event={"ID":"5633c143-692a-4e6a-993c-ed35be3b9c1a","Type":"ContainerDied","Data":"766fd2b4f088d352fbe034ab4a97d903d5750db0e287aceb653e02b6621955d4"}
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.862917 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535126-qvn9j"
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.863958 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766fd2b4f088d352fbe034ab4a97d903d5750db0e287aceb653e02b6621955d4"
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.866701 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-qfnj9"]
Feb 26 12:06:16 crc kubenswrapper[4724]: I0226 12:06:16.875559 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535120-qfnj9"]
Feb 26 12:06:17 crc kubenswrapper[4724]: I0226 12:06:17.986482 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d49bbd9-833c-413f-a187-ebbb2a4bce2b" path="/var/lib/kubelet/pods/3d49bbd9-833c-413f-a187-ebbb2a4bce2b/volumes"
Feb 26 12:06:46 crc kubenswrapper[4724]: I0226 12:06:46.906250 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:06:46 crc kubenswrapper[4724]: I0226 12:06:46.907468 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:07:14 crc kubenswrapper[4724]: I0226 12:07:14.828237 4724 scope.go:117] "RemoveContainer" containerID="af1f479c9ae010d452db170a1f868339c37b491d3c1f0684f7cdb8a8cc0abc88"
Feb 26 12:07:16 crc kubenswrapper[4724]: I0226 12:07:16.906611 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:07:16 crc kubenswrapper[4724]: I0226 12:07:16.907194 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:07:46 crc kubenswrapper[4724]: I0226 12:07:46.905809 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:07:46 crc kubenswrapper[4724]: I0226 12:07:46.906460 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:07:46 crc kubenswrapper[4724]: I0226 12:07:46.906514 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 12:07:46 crc kubenswrapper[4724]: I0226 12:07:46.907412 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4df5c2ef2a1caf17aafa325fb1254464a251fba6b0d8441497b238e575c08bc9"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 12:07:46 crc kubenswrapper[4724]: I0226 12:07:46.908707 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://4df5c2ef2a1caf17aafa325fb1254464a251fba6b0d8441497b238e575c08bc9" gracePeriod=600
Feb 26 12:07:47 crc kubenswrapper[4724]: I0226 12:07:47.418871 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="4df5c2ef2a1caf17aafa325fb1254464a251fba6b0d8441497b238e575c08bc9" exitCode=0
Feb 26 12:07:47 crc kubenswrapper[4724]: I0226 12:07:47.419060 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"4df5c2ef2a1caf17aafa325fb1254464a251fba6b0d8441497b238e575c08bc9"}
Feb 26 12:07:47 crc kubenswrapper[4724]: I0226 12:07:47.419272 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"}
Feb 26 12:07:47 crc kubenswrapper[4724]: I0226 12:07:47.419296 4724 scope.go:117] "RemoveContainer" containerID="9bae72d485f176592ccdda65ec065d60aa1741d3f1449dbe816cfc4a74f91ca5"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.209457 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535128-k4mvw"]
Feb 26 12:08:00 crc kubenswrapper[4724]: E0226 12:08:00.211723 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5633c143-692a-4e6a-993c-ed35be3b9c1a" containerName="oc"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.211753 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5633c143-692a-4e6a-993c-ed35be3b9c1a" containerName="oc"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.212085 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5633c143-692a-4e6a-993c-ed35be3b9c1a" containerName="oc"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.213347 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.216309 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.216990 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.217056 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.255221 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-k4mvw"]
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.394495 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9mb\" (UniqueName: \"kubernetes.io/projected/c018a601-fd63-4e85-a94e-582acf4fa03b-kube-api-access-mc9mb\") pod \"auto-csr-approver-29535128-k4mvw\" (UID: \"c018a601-fd63-4e85-a94e-582acf4fa03b\") " pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.496626 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc9mb\" (UniqueName: \"kubernetes.io/projected/c018a601-fd63-4e85-a94e-582acf4fa03b-kube-api-access-mc9mb\") pod \"auto-csr-approver-29535128-k4mvw\" (UID: \"c018a601-fd63-4e85-a94e-582acf4fa03b\") " pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.541276 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc9mb\" (UniqueName: \"kubernetes.io/projected/c018a601-fd63-4e85-a94e-582acf4fa03b-kube-api-access-mc9mb\") pod \"auto-csr-approver-29535128-k4mvw\" (UID: \"c018a601-fd63-4e85-a94e-582acf4fa03b\") " pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:00 crc kubenswrapper[4724]: I0226 12:08:00.562370 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:02 crc kubenswrapper[4724]: I0226 12:08:02.048779 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-k4mvw"]
Feb 26 12:08:02 crc kubenswrapper[4724]: I0226 12:08:02.578913 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-k4mvw" event={"ID":"c018a601-fd63-4e85-a94e-582acf4fa03b","Type":"ContainerStarted","Data":"cc2dafb6b8fb0cce6cdcdd123f52d8bd00f55cad649ede5237c32d05bd3d989e"}
Feb 26 12:08:05 crc kubenswrapper[4724]: I0226 12:08:05.601413 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-k4mvw" event={"ID":"c018a601-fd63-4e85-a94e-582acf4fa03b","Type":"ContainerStarted","Data":"c4cb52490a068714475dcd5a7244517ead2079b3b4adb52c6fc184bc0dc064c8"}
Feb 26 12:08:05 crc kubenswrapper[4724]: I0226 12:08:05.626106 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535128-k4mvw" podStartSLOduration=3.488258798 podStartE2EDuration="5.626088237s" podCreationTimestamp="2026-02-26 12:08:00 +0000 UTC" firstStartedPulling="2026-02-26 12:08:02.061498418 +0000 UTC m=+3748.717237533" lastFinishedPulling="2026-02-26 12:08:04.199327857 +0000 UTC m=+3750.855066972" observedRunningTime="2026-02-26 12:08:05.618753839 +0000 UTC m=+3752.274492954" watchObservedRunningTime="2026-02-26 12:08:05.626088237 +0000 UTC m=+3752.281827352"
Feb 26 12:08:09 crc kubenswrapper[4724]: I0226 12:08:09.640404 4724 generic.go:334] "Generic (PLEG): container finished" podID="c018a601-fd63-4e85-a94e-582acf4fa03b" containerID="c4cb52490a068714475dcd5a7244517ead2079b3b4adb52c6fc184bc0dc064c8" exitCode=0
Feb 26 12:08:09 crc kubenswrapper[4724]: I0226 12:08:09.640875 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-k4mvw" event={"ID":"c018a601-fd63-4e85-a94e-582acf4fa03b","Type":"ContainerDied","Data":"c4cb52490a068714475dcd5a7244517ead2079b3b4adb52c6fc184bc0dc064c8"}
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.164531 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.219318 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc9mb\" (UniqueName: \"kubernetes.io/projected/c018a601-fd63-4e85-a94e-582acf4fa03b-kube-api-access-mc9mb\") pod \"c018a601-fd63-4e85-a94e-582acf4fa03b\" (UID: \"c018a601-fd63-4e85-a94e-582acf4fa03b\") "
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.257886 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c018a601-fd63-4e85-a94e-582acf4fa03b-kube-api-access-mc9mb" (OuterVolumeSpecName: "kube-api-access-mc9mb") pod "c018a601-fd63-4e85-a94e-582acf4fa03b" (UID: "c018a601-fd63-4e85-a94e-582acf4fa03b"). InnerVolumeSpecName "kube-api-access-mc9mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.322088 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc9mb\" (UniqueName: \"kubernetes.io/projected/c018a601-fd63-4e85-a94e-582acf4fa03b-kube-api-access-mc9mb\") on node \"crc\" DevicePath \"\""
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.669643 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535128-k4mvw" event={"ID":"c018a601-fd63-4e85-a94e-582acf4fa03b","Type":"ContainerDied","Data":"cc2dafb6b8fb0cce6cdcdd123f52d8bd00f55cad649ede5237c32d05bd3d989e"}
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.669689 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2dafb6b8fb0cce6cdcdd123f52d8bd00f55cad649ede5237c32d05bd3d989e"
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.669696 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535128-k4mvw"
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.737098 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-zpprr"]
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.747348 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535122-zpprr"]
Feb 26 12:08:11 crc kubenswrapper[4724]: I0226 12:08:11.991171 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea65c59-e7cb-443d-8450-65fc9d963caf" path="/var/lib/kubelet/pods/5ea65c59-e7cb-443d-8450-65fc9d963caf/volumes"
Feb 26 12:08:15 crc kubenswrapper[4724]: I0226 12:08:15.046644 4724 scope.go:117] "RemoveContainer" containerID="ff18a4aaa12faf39d325f50bfab8dc39a758e4a2f88cf6410bdfb38ac733e7ec"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.289079 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxk4j"]
Feb 26 12:08:21 crc kubenswrapper[4724]: E0226 12:08:21.290482 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c018a601-fd63-4e85-a94e-582acf4fa03b" containerName="oc"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.290673 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c018a601-fd63-4e85-a94e-582acf4fa03b" containerName="oc"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.290902 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c018a601-fd63-4e85-a94e-582acf4fa03b" containerName="oc"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.311985 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.418684 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-catalog-content\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.418762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-utilities\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.418943 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlqp\" (UniqueName: \"kubernetes.io/projected/b4e29ef5-5447-419e-a920-87e255b48d1a-kube-api-access-zvlqp\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.481982 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxk4j"]
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.521412 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-utilities\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.521500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvlqp\" (UniqueName: \"kubernetes.io/projected/b4e29ef5-5447-419e-a920-87e255b48d1a-kube-api-access-zvlqp\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.521663 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-catalog-content\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.522318 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-catalog-content\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.522589 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-utilities\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.560724 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvlqp\" (UniqueName: \"kubernetes.io/projected/b4e29ef5-5447-419e-a920-87e255b48d1a-kube-api-access-zvlqp\") pod \"certified-operators-qxk4j\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:21 crc kubenswrapper[4724]: I0226 12:08:21.642035 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:22 crc kubenswrapper[4724]: I0226 12:08:22.678998 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxk4j"]
Feb 26 12:08:22 crc kubenswrapper[4724]: W0226 12:08:22.708263 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4e29ef5_5447_419e_a920_87e255b48d1a.slice/crio-54189caa82e2db6a5206d1504565255b8e719a3b3ba539beae3f8273fd321202 WatchSource:0}: Error finding container 54189caa82e2db6a5206d1504565255b8e719a3b3ba539beae3f8273fd321202: Status 404 returned error can't find the container with id 54189caa82e2db6a5206d1504565255b8e719a3b3ba539beae3f8273fd321202
Feb 26 12:08:22 crc kubenswrapper[4724]: I0226 12:08:22.786712 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerStarted","Data":"54189caa82e2db6a5206d1504565255b8e719a3b3ba539beae3f8273fd321202"}
Feb 26 12:08:23 crc kubenswrapper[4724]: I0226 12:08:23.797480 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerID="945de0dadb2fb84b9f4dcca7bca7d4478fa1aab96ddb0a950d671429a13aead3" exitCode=0
Feb 26 12:08:23 crc kubenswrapper[4724]: I0226 12:08:23.797600 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerDied","Data":"945de0dadb2fb84b9f4dcca7bca7d4478fa1aab96ddb0a950d671429a13aead3"}
Feb 26 12:08:25 crc kubenswrapper[4724]: I0226 12:08:25.819270 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerStarted","Data":"9e66fad6d08cbf3fa514a5df759526b573b96a4905bd18c8eeedce72da572235"}
Feb 26 12:08:31 crc kubenswrapper[4724]: I0226 12:08:31.878019 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerID="9e66fad6d08cbf3fa514a5df759526b573b96a4905bd18c8eeedce72da572235" exitCode=0
Feb 26 12:08:31 crc kubenswrapper[4724]: I0226 12:08:31.878589 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerDied","Data":"9e66fad6d08cbf3fa514a5df759526b573b96a4905bd18c8eeedce72da572235"}
Feb 26 12:08:32 crc kubenswrapper[4724]: I0226 12:08:32.893396 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerStarted","Data":"1a30428400f5e105bd03a3d77b85441eb7b0eb85addb9325e7dda6674ccade4b"}
Feb 26 12:08:32 crc kubenswrapper[4724]: I0226 12:08:32.922725 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxk4j" podStartSLOduration=3.328487378 podStartE2EDuration="11.922700956s" podCreationTimestamp="2026-02-26 12:08:21 +0000 UTC" firstStartedPulling="2026-02-26 12:08:23.799753405 +0000 UTC m=+3770.455492520" lastFinishedPulling="2026-02-26 12:08:32.393966983 +0000 UTC m=+3779.049706098" observedRunningTime="2026-02-26 12:08:32.916934088 +0000 UTC m=+3779.572673213" watchObservedRunningTime="2026-02-26 12:08:32.922700956 +0000 UTC m=+3779.578440081"
Feb 26 12:08:41 crc kubenswrapper[4724]: I0226 12:08:41.644071 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:41 crc kubenswrapper[4724]: I0226 12:08:41.645962 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxk4j"
Feb 26 12:08:42 crc kubenswrapper[4724]: I0226 12:08:42.725958 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:08:42 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:08:42 crc kubenswrapper[4724]: >
Feb 26 12:08:52 crc kubenswrapper[4724]: I0226 12:08:52.685863 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:08:52 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:08:52 crc kubenswrapper[4724]: >
Feb 26 12:09:02 crc kubenswrapper[4724]: I0226 12:09:02.732918 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:09:02 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:09:02 crc kubenswrapper[4724]: >
Feb 26 12:09:12 crc kubenswrapper[4724]: I0226 12:09:12.692231 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:09:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:09:12 crc kubenswrapper[4724]: >
Feb 26 12:09:16 crc kubenswrapper[4724]: I0226 12:09:16.834010 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6abc9b19-0018-46d1-a119-0ffb069a1795" containerName="galera" probeResult="failure" output="command timed out"
Feb 26 12:09:16 crc kubenswrapper[4724]: I0226 12:09:16.834016 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6abc9b19-0018-46d1-a119-0ffb069a1795" containerName="galera" probeResult="failure" output="command timed out"
Feb 26 12:09:22 crc kubenswrapper[4724]: I0226 12:09:22.691727 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:09:22 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:09:22 crc kubenswrapper[4724]: >
Feb 26 12:09:25 crc kubenswrapper[4724]: I0226 12:09:25.783887 4724 fsHandler.go:133] fs:
disk usage and inodes count on following dirs took 1.056943221s: [/var/lib/containers/storage/overlay/c2f89af4fc482472f58c8accf04952b9d2a3007863812c2d4024180be8190cc4/diff /var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon/2.log]; will not log again for this container unless duration exceeds 2s Feb 26 12:09:32 crc kubenswrapper[4724]: I0226 12:09:32.746879 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=< Feb 26 12:09:32 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:09:32 crc kubenswrapper[4724]: > Feb 26 12:09:42 crc kubenswrapper[4724]: I0226 12:09:42.725545 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" probeResult="failure" output=< Feb 26 12:09:42 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:09:42 crc kubenswrapper[4724]: > Feb 26 12:09:51 crc kubenswrapper[4724]: I0226 12:09:51.759881 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxk4j" Feb 26 12:09:51 crc kubenswrapper[4724]: I0226 12:09:51.837692 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxk4j" Feb 26 12:09:52 crc kubenswrapper[4724]: I0226 12:09:52.285959 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxk4j"] Feb 26 12:09:52 crc kubenswrapper[4724]: I0226 12:09:52.843098 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxk4j" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" containerID="cri-o://1a30428400f5e105bd03a3d77b85441eb7b0eb85addb9325e7dda6674ccade4b" gracePeriod=2 Feb 26 12:09:53 crc kubenswrapper[4724]: I0226 12:09:53.859988 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerDied","Data":"1a30428400f5e105bd03a3d77b85441eb7b0eb85addb9325e7dda6674ccade4b"} Feb 26 12:09:53 crc kubenswrapper[4724]: I0226 12:09:53.861455 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerID="1a30428400f5e105bd03a3d77b85441eb7b0eb85addb9325e7dda6674ccade4b" exitCode=0 Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.611395 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxk4j" Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.723909 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvlqp\" (UniqueName: \"kubernetes.io/projected/b4e29ef5-5447-419e-a920-87e255b48d1a-kube-api-access-zvlqp\") pod \"b4e29ef5-5447-419e-a920-87e255b48d1a\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.724091 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-catalog-content\") pod \"b4e29ef5-5447-419e-a920-87e255b48d1a\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.724314 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-utilities\") pod \"b4e29ef5-5447-419e-a920-87e255b48d1a\" (UID: \"b4e29ef5-5447-419e-a920-87e255b48d1a\") " Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.782318 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-utilities" (OuterVolumeSpecName: "utilities") pod "b4e29ef5-5447-419e-a920-87e255b48d1a" (UID: "b4e29ef5-5447-419e-a920-87e255b48d1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.827019 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.908861 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e29ef5-5447-419e-a920-87e255b48d1a-kube-api-access-zvlqp" (OuterVolumeSpecName: "kube-api-access-zvlqp") pod "b4e29ef5-5447-419e-a920-87e255b48d1a" (UID: "b4e29ef5-5447-419e-a920-87e255b48d1a"). InnerVolumeSpecName "kube-api-access-zvlqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.928798 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvlqp\" (UniqueName: \"kubernetes.io/projected/b4e29ef5-5447-419e-a920-87e255b48d1a-kube-api-access-zvlqp\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.936352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxk4j" event={"ID":"b4e29ef5-5447-419e-a920-87e255b48d1a","Type":"ContainerDied","Data":"54189caa82e2db6a5206d1504565255b8e719a3b3ba539beae3f8273fd321202"} Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.936417 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxk4j" Feb 26 12:09:56 crc kubenswrapper[4724]: I0226 12:09:56.936886 4724 scope.go:117] "RemoveContainer" containerID="1a30428400f5e105bd03a3d77b85441eb7b0eb85addb9325e7dda6674ccade4b" Feb 26 12:09:57 crc kubenswrapper[4724]: I0226 12:09:57.097213 4724 scope.go:117] "RemoveContainer" containerID="9e66fad6d08cbf3fa514a5df759526b573b96a4905bd18c8eeedce72da572235" Feb 26 12:09:57 crc kubenswrapper[4724]: I0226 12:09:57.181592 4724 scope.go:117] "RemoveContainer" containerID="945de0dadb2fb84b9f4dcca7bca7d4478fa1aab96ddb0a950d671429a13aead3" Feb 26 12:09:57 crc kubenswrapper[4724]: I0226 12:09:57.279364 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4e29ef5-5447-419e-a920-87e255b48d1a" (UID: "b4e29ef5-5447-419e-a920-87e255b48d1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:09:57 crc kubenswrapper[4724]: I0226 12:09:57.335913 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e29ef5-5447-419e-a920-87e255b48d1a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:09:57 crc kubenswrapper[4724]: I0226 12:09:57.691844 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxk4j"] Feb 26 12:09:57 crc kubenswrapper[4724]: I0226 12:09:57.701931 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxk4j"] Feb 26 12:09:58 crc kubenswrapper[4724]: I0226 12:09:58.061718 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" path="/var/lib/kubelet/pods/b4e29ef5-5447-419e-a920-87e255b48d1a/volumes" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.145883 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535130-hgz2x"] Feb 26 12:10:01 crc kubenswrapper[4724]: E0226 12:10:01.152393 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="extract-utilities" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.152454 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="extract-utilities" Feb 26 12:10:01 crc kubenswrapper[4724]: E0226 12:10:01.156129 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.156142 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" Feb 26 12:10:01 crc kubenswrapper[4724]: E0226 12:10:01.156159 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="extract-content" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.156169 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="extract-content" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.159926 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e29ef5-5447-419e-a920-87e255b48d1a" containerName="registry-server" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 
12:10:01.224603 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.328988 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.329028 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.328990 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.339747 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vfwj\" (UniqueName: \"kubernetes.io/projected/a6a414da-dd5a-4384-818d-50f8d04e5c65-kube-api-access-5vfwj\") pod \"auto-csr-approver-29535130-hgz2x\" (UID: \"a6a414da-dd5a-4384-818d-50f8d04e5c65\") " pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.443588 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vfwj\" (UniqueName: \"kubernetes.io/projected/a6a414da-dd5a-4384-818d-50f8d04e5c65-kube-api-access-5vfwj\") pod \"auto-csr-approver-29535130-hgz2x\" (UID: \"a6a414da-dd5a-4384-818d-50f8d04e5c65\") " pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.788112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vfwj\" (UniqueName: \"kubernetes.io/projected/a6a414da-dd5a-4384-818d-50f8d04e5c65-kube-api-access-5vfwj\") pod \"auto-csr-approver-29535130-hgz2x\" (UID: \"a6a414da-dd5a-4384-818d-50f8d04e5c65\") " pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:01 crc kubenswrapper[4724]: I0226 12:10:01.947026 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:02 crc kubenswrapper[4724]: I0226 12:10:02.045758 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-hgz2x"] Feb 26 12:10:06 crc kubenswrapper[4724]: I0226 12:10:06.964127 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8h6mc" podUID="e8868abd-2431-4e5b-98d6-574ca6449d4b" containerName="registry-server" probeResult="failure" output=< Feb 26 12:10:06 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:10:06 crc kubenswrapper[4724]: > Feb 26 12:10:06 crc kubenswrapper[4724]: I0226 12:10:06.964248 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-8h6mc" podUID="e8868abd-2431-4e5b-98d6-574ca6449d4b" containerName="registry-server" probeResult="failure" output=< Feb 26 12:10:06 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:10:06 crc kubenswrapper[4724]: > Feb 26 12:10:08 crc kubenswrapper[4724]: I0226 12:10:08.163511 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" podUID="39700bc5-43f0-49b6-b510-523322e34eb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:08 crc kubenswrapper[4724]: I0226 12:10:08.179605 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:08 crc kubenswrapper[4724]: I0226 12:10:08.215539 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:08 crc kubenswrapper[4724]: I0226 12:10:08.220357 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l" podUID="39700bc5-43f0-49b6-b510-523322e34eb5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:08 crc kubenswrapper[4724]: I0226 12:10:08.220490 4724 patch_prober.go:28] interesting pod/console-operator-58897d9998-rrbmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:08 crc kubenswrapper[4724]: I0226 12:10:08.220512 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rrbmc" podUID="9063c94b-5e44-4a4a-9c85-e122cf7751b9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:09 crc 
kubenswrapper[4724]: I0226 12:10:09.097088 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" podUID="48de473d-2e43-44ee-b0d1-db2c8e11fc2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:09 crc kubenswrapper[4724]: I0226 12:10:09.997418 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-hgz2x"] Feb 26 12:10:10 crc kubenswrapper[4724]: I0226 12:10:10.782736 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:10:11 crc kubenswrapper[4724]: I0226 12:10:11.160827 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" event={"ID":"a6a414da-dd5a-4384-818d-50f8d04e5c65","Type":"ContainerStarted","Data":"40dffe02ae40f622a7f2917cad3a04d36b6fcf683333bbd5e3d1c7e18c64b42a"} Feb 26 12:10:16 crc kubenswrapper[4724]: I0226 12:10:16.946000 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:10:16 crc kubenswrapper[4724]: I0226 12:10:16.951280 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:10:19 crc kubenswrapper[4724]: I0226 12:10:19.278558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" event={"ID":"a6a414da-dd5a-4384-818d-50f8d04e5c65","Type":"ContainerStarted","Data":"274aeced4678bce8e03aec25bea200a90199fbe6a0adea156efd24335682d69f"} Feb 26 12:10:19 crc kubenswrapper[4724]: I0226 12:10:19.424431 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" podStartSLOduration=16.471580244 podStartE2EDuration="19.419301678s" podCreationTimestamp="2026-02-26 12:10:00 +0000 UTC" firstStartedPulling="2026-02-26 12:10:10.744378141 +0000 UTC m=+3877.400117256" lastFinishedPulling="2026-02-26 12:10:13.692099575 +0000 UTC m=+3880.347838690" observedRunningTime="2026-02-26 12:10:19.405299359 +0000 UTC m=+3886.061038474" watchObservedRunningTime="2026-02-26 12:10:19.419301678 +0000 UTC m=+3886.075040793" Feb 26 12:10:25 crc kubenswrapper[4724]: I0226 12:10:25.462920 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" event={"ID":"a6a414da-dd5a-4384-818d-50f8d04e5c65","Type":"ContainerDied","Data":"274aeced4678bce8e03aec25bea200a90199fbe6a0adea156efd24335682d69f"} Feb 26 12:10:25 crc kubenswrapper[4724]: I0226 12:10:25.462395 4724 generic.go:334] "Generic (PLEG): container finished" podID="a6a414da-dd5a-4384-818d-50f8d04e5c65" containerID="274aeced4678bce8e03aec25bea200a90199fbe6a0adea156efd24335682d69f" exitCode=0 Feb 26 12:10:26 crc kubenswrapper[4724]: I0226 12:10:26.822779 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6abc9b19-0018-46d1-a119-0ffb069a1795" 
containerName="galera" probeResult="failure" output="command timed out" Feb 26 12:10:26 crc kubenswrapper[4724]: I0226 12:10:26.824227 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6abc9b19-0018-46d1-a119-0ffb069a1795" containerName="galera" probeResult="failure" output="command timed out" Feb 26 12:10:27 crc kubenswrapper[4724]: I0226 12:10:27.134459 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:27 crc kubenswrapper[4724]: I0226 12:10:27.134467 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:27 crc kubenswrapper[4724]: I0226 12:10:27.162722 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:27 crc kubenswrapper[4724]: I0226 12:10:27.162720 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:28 crc kubenswrapper[4724]: I0226 12:10:28.933483 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-75d9b57894-2862v" podUID="48de473d-2e43-44ee-b0d1-db2c8e11fc2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.322910 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.390506 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vfwj\" (UniqueName: \"kubernetes.io/projected/a6a414da-dd5a-4384-818d-50f8d04e5c65-kube-api-access-5vfwj\") pod \"a6a414da-dd5a-4384-818d-50f8d04e5c65\" (UID: \"a6a414da-dd5a-4384-818d-50f8d04e5c65\") " Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.462228 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a414da-dd5a-4384-818d-50f8d04e5c65-kube-api-access-5vfwj" (OuterVolumeSpecName: "kube-api-access-5vfwj") pod "a6a414da-dd5a-4384-818d-50f8d04e5c65" (UID: "a6a414da-dd5a-4384-818d-50f8d04e5c65"). InnerVolumeSpecName "kube-api-access-5vfwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.572094 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.574452 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535130-hgz2x" event={"ID":"a6a414da-dd5a-4384-818d-50f8d04e5c65","Type":"ContainerDied","Data":"40dffe02ae40f622a7f2917cad3a04d36b6fcf683333bbd5e3d1c7e18c64b42a"} Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.574538 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40dffe02ae40f622a7f2917cad3a04d36b6fcf683333bbd5e3d1c7e18c64b42a" Feb 26 12:10:32 crc kubenswrapper[4724]: I0226 12:10:32.597746 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vfwj\" (UniqueName: \"kubernetes.io/projected/a6a414da-dd5a-4384-818d-50f8d04e5c65-kube-api-access-5vfwj\") on node \"crc\" DevicePath \"\"" Feb 26 12:10:33 crc kubenswrapper[4724]: I0226 12:10:33.836499 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-b86hc" podUID="d848b417-9306-4564-b059-0dc84bd7ec1a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:35 crc kubenswrapper[4724]: I0226 12:10:35.757612 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-wgsrt"] Feb 26 12:10:35 crc kubenswrapper[4724]: I0226 12:10:35.902142 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535124-wgsrt"] Feb 26 12:10:35 crc kubenswrapper[4724]: I0226 12:10:35.946398 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" podUID="da86929c-f438-4994-80be-1a7aa3b7b76e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:35 crc kubenswrapper[4724]: I0226 12:10:35.946414 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vp8fp" podUID="da86929c-f438-4994-80be-1a7aa3b7b76e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.021097 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17982b22-2b96-4fee-8902-c3e25989021b" path="/var/lib/kubelet/pods/17982b22-2b96-4fee-8902-c3e25989021b/volumes" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.227776 4724 patch_prober.go:28] interesting pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.232896 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.227764 4724 patch_prober.go:28] interesting 
pod/router-default-5444994796-h27ll container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.233359 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h27ll" podUID="45069e17-f50a-47d5-9552-b32b9eecadce" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.319210 4724 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-mckmm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.319310 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-mckmm" podUID="74c6d322-04d4-4a3e-b3d7-fa6157c5a696" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.734143 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-8h6mc" podUID="e8868abd-2431-4e5b-98d6-574ca6449d4b" containerName="registry-server" probeResult="failure" output=< Feb 26 12:10:36 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:10:36 crc kubenswrapper[4724]: > Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.801773 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="6abc9b19-0018-46d1-a119-0ffb069a1795" containerName="galera" probeResult="failure" output="command timed out" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.802248 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6abc9b19-0018-46d1-a119-0ffb069a1795" containerName="galera" probeResult="failure" output="command timed out" Feb 26 12:10:36 crc kubenswrapper[4724]: I0226 12:10:36.824063 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8h6mc" podUID="e8868abd-2431-4e5b-98d6-574ca6449d4b" containerName="registry-server" probeResult="failure" output=< Feb 26 12:10:36 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:10:36 crc kubenswrapper[4724]: > Feb 26 12:10:37 crc kubenswrapper[4724]: I0226 12:10:37.132956 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:37 crc kubenswrapper[4724]: I0226 12:10:37.133044 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:37 crc kubenswrapper[4724]: I0226 12:10:37.133054 4724 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zxggv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 12:10:37 crc kubenswrapper[4724]: I0226 12:10:37.133097 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zxggv" podUID="6f17bea2-a6c9-4d5b-a61e-95ebacfbaf52" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 12:10:37 crc kubenswrapper[4724]: I0226 12:10:37.814157 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b0d66ab1-513b-452a-9f31-bfc4b4be6c18" containerName="galera" probeResult="failure" output="command timed out" Feb 26 12:10:37 crc kubenswrapper[4724]: I0226 12:10:37.814156 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b0d66ab1-513b-452a-9f31-bfc4b4be6c18" containerName="galera" probeResult="failure" output="command timed out" Feb 26 12:10:46 crc kubenswrapper[4724]: I0226 12:10:46.978284 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:10:47 crc kubenswrapper[4724]: I0226 12:10:46.989482 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:11:16 crc kubenswrapper[4724]: I0226 12:11:16.912675 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:11:16 crc kubenswrapper[4724]: I0226 12:11:16.919670 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:11:16 crc kubenswrapper[4724]: I0226 12:11:16.923889 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 12:11:16 crc kubenswrapper[4724]: I0226 12:11:16.929119 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:11:16 crc kubenswrapper[4724]: I0226 12:11:16.930572 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" gracePeriod=600 Feb 26 12:11:17 crc kubenswrapper[4724]: I0226 12:11:17.089562 4724 scope.go:117] "RemoveContainer" containerID="2977cccbc649299cba59ee3978435667753095004b3cdf83ff15ed846198b37a" Feb 26 12:11:17 crc kubenswrapper[4724]: E0226 12:11:17.116997 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:11:17 crc kubenswrapper[4724]: I0226 12:11:17.256419 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" exitCode=0 Feb 26 12:11:17 crc kubenswrapper[4724]: I0226 12:11:17.256483 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"} Feb 26 12:11:17 crc kubenswrapper[4724]: I0226 12:11:17.257652 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:11:17 crc kubenswrapper[4724]: E0226 12:11:17.257965 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:11:17 crc kubenswrapper[4724]: I0226 12:11:17.258203 4724 scope.go:117] "RemoveContainer" containerID="4df5c2ef2a1caf17aafa325fb1254464a251fba6b0d8441497b238e575c08bc9" Feb 26 12:11:31 crc kubenswrapper[4724]: I0226 12:11:31.982653 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:11:31 crc kubenswrapper[4724]: E0226 12:11:31.990860 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:11:41 crc kubenswrapper[4724]: E0226 12:11:41.761271 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:53128->38.102.83.145:45037: write tcp 38.102.83.145:53128->38.102.83.145:45037: write: connection reset by peer Feb 26 12:11:43 crc 
kubenswrapper[4724]: I0226 12:11:43.588769 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8z4p4"] Feb 26 12:11:43 crc kubenswrapper[4724]: E0226 12:11:43.596632 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a414da-dd5a-4384-818d-50f8d04e5c65" containerName="oc" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.596685 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a414da-dd5a-4384-818d-50f8d04e5c65" containerName="oc" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.601819 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a414da-dd5a-4384-818d-50f8d04e5c65" containerName="oc" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.609323 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.712155 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78h2c\" (UniqueName: \"kubernetes.io/projected/913496db-dd8f-413c-8cee-5b99de20e179-kube-api-access-78h2c\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.712889 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-utilities\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.713111 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-catalog-content\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.814679 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78h2c\" (UniqueName: \"kubernetes.io/projected/913496db-dd8f-413c-8cee-5b99de20e179-kube-api-access-78h2c\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.814747 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-utilities\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.814801 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-catalog-content\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.829022 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-utilities\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.836233 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-catalog-content\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.874866 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78h2c\" (UniqueName: \"kubernetes.io/projected/913496db-dd8f-413c-8cee-5b99de20e179-kube-api-access-78h2c\") pod \"community-operators-8z4p4\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.883609 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z4p4"] Feb 26 12:11:43 crc kubenswrapper[4724]: I0226 12:11:43.982773 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:11:45 crc kubenswrapper[4724]: I0226 12:11:45.682945 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z4p4"] Feb 26 12:11:45 crc kubenswrapper[4724]: I0226 12:11:45.975860 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:11:45 crc kubenswrapper[4724]: E0226 12:11:45.976420 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:11:46 crc kubenswrapper[4724]: I0226 12:11:46.528679 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerDied","Data":"c98ea57e0b555d3ae2e4851171f19655940e2abc937755ad758ce50e39ff32d8"} Feb 26 12:11:46 crc kubenswrapper[4724]: I0226 12:11:46.528573 4724 generic.go:334] "Generic (PLEG): container finished" podID="913496db-dd8f-413c-8cee-5b99de20e179" containerID="c98ea57e0b555d3ae2e4851171f19655940e2abc937755ad758ce50e39ff32d8" exitCode=0 Feb 26 12:11:46 crc kubenswrapper[4724]: I0226 12:11:46.529919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerStarted","Data":"8bc09056aca21f8d00d8bbe8172c0c270249aa547609d59ed39449d7e9f6cc09"} Feb 26 12:11:50 crc kubenswrapper[4724]: I0226 12:11:50.642889 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerStarted","Data":"b7d2eb2263d33a7d7565b59f1231cabefd0f7dd2a685351a78860ad5a2fd34f1"} Feb 26 12:11:53 crc kubenswrapper[4724]: I0226 12:11:53.670801 4724 generic.go:334] "Generic (PLEG): container 
finished" podID="913496db-dd8f-413c-8cee-5b99de20e179" containerID="b7d2eb2263d33a7d7565b59f1231cabefd0f7dd2a685351a78860ad5a2fd34f1" exitCode=0 Feb 26 12:11:53 crc kubenswrapper[4724]: I0226 12:11:53.670937 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerDied","Data":"b7d2eb2263d33a7d7565b59f1231cabefd0f7dd2a685351a78860ad5a2fd34f1"} Feb 26 12:11:54 crc kubenswrapper[4724]: I0226 12:11:54.681083 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerStarted","Data":"b9fec49f8865f70431c21ca99db08c46047104d3ab58d523dc3a5f5b45954934"} Feb 26 12:11:54 crc kubenswrapper[4724]: I0226 12:11:54.702953 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8z4p4" podStartSLOduration=4.062876765 podStartE2EDuration="11.702155994s" podCreationTimestamp="2026-02-26 12:11:43 +0000 UTC" firstStartedPulling="2026-02-26 12:11:46.530529098 +0000 UTC m=+3973.186268213" lastFinishedPulling="2026-02-26 12:11:54.169808327 +0000 UTC m=+3980.825547442" observedRunningTime="2026-02-26 12:11:54.699085835 +0000 UTC m=+3981.354824970" watchObservedRunningTime="2026-02-26 12:11:54.702155994 +0000 UTC m=+3981.357895109" Feb 26 12:11:57 crc kubenswrapper[4724]: I0226 12:11:57.978294 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:11:57 crc kubenswrapper[4724]: E0226 12:11:57.982351 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.583718 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535132-9tqc8"] Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.605907 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.631559 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.631577 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.632624 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.785239 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnpb\" (UniqueName: \"kubernetes.io/projected/862527d2-8e1c-41c3-a3fb-25b48262f2d0-kube-api-access-rfnpb\") pod \"auto-csr-approver-29535132-9tqc8\" (UID: \"862527d2-8e1c-41c3-a3fb-25b48262f2d0\") " pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.887545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnpb\" (UniqueName: \"kubernetes.io/projected/862527d2-8e1c-41c3-a3fb-25b48262f2d0-kube-api-access-rfnpb\") pod \"auto-csr-approver-29535132-9tqc8\" (UID: \"862527d2-8e1c-41c3-a3fb-25b48262f2d0\") " pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.920619 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-9tqc8"] Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.979104 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnpb\" (UniqueName: \"kubernetes.io/projected/862527d2-8e1c-41c3-a3fb-25b48262f2d0-kube-api-access-rfnpb\") pod \"auto-csr-approver-29535132-9tqc8\" (UID: \"862527d2-8e1c-41c3-a3fb-25b48262f2d0\") " pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:00 crc kubenswrapper[4724]: I0226 12:12:00.995149 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:02 crc kubenswrapper[4724]: I0226 12:12:02.879421 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-9tqc8"] Feb 26 12:12:03 crc kubenswrapper[4724]: I0226 12:12:03.783376 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" event={"ID":"862527d2-8e1c-41c3-a3fb-25b48262f2d0","Type":"ContainerStarted","Data":"7dd347579a767a221e04669e54e0851ef09e6902c874904fe3866546b31baec0"} Feb 26 12:12:04 crc kubenswrapper[4724]: I0226 12:12:04.009482 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:12:04 crc kubenswrapper[4724]: I0226 12:12:04.009529 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:12:05 crc kubenswrapper[4724]: I0226 12:12:05.078302 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8z4p4" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" probeResult="failure" output=< Feb 26 12:12:05 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:12:05 crc kubenswrapper[4724]: > Feb 26 12:12:06 crc kubenswrapper[4724]: I0226 12:12:06.831757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" event={"ID":"862527d2-8e1c-41c3-a3fb-25b48262f2d0","Type":"ContainerStarted","Data":"13863c96354220ba47d4c7137ea1f8de10c786006eb41f873544316cc4717907"} Feb 26 12:12:06 crc kubenswrapper[4724]: I0226 12:12:06.871959 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" podStartSLOduration=5.2788575810000005 podStartE2EDuration="6.851416486s" podCreationTimestamp="2026-02-26 12:12:00 +0000 UTC" firstStartedPulling="2026-02-26 12:12:02.933161536 +0000 UTC m=+3989.588900651" lastFinishedPulling="2026-02-26 12:12:04.505720441 +0000 UTC m=+3991.161459556" observedRunningTime="2026-02-26 12:12:06.848954363 +0000 UTC m=+3993.504693508" watchObservedRunningTime="2026-02-26 12:12:06.851416486 +0000 UTC m=+3993.507155601" Feb 26 12:12:08 crc kubenswrapper[4724]: I0226 12:12:08.855801 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" event={"ID":"862527d2-8e1c-41c3-a3fb-25b48262f2d0","Type":"ContainerDied","Data":"13863c96354220ba47d4c7137ea1f8de10c786006eb41f873544316cc4717907"} Feb 26 12:12:08 crc kubenswrapper[4724]: I0226 12:12:08.857394 4724 generic.go:334] "Generic (PLEG): container finished" podID="862527d2-8e1c-41c3-a3fb-25b48262f2d0" containerID="13863c96354220ba47d4c7137ea1f8de10c786006eb41f873544316cc4717907" exitCode=0 Feb 26 12:12:09 crc kubenswrapper[4724]: I0226 12:12:09.981124 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:12:09 crc kubenswrapper[4724]: E0226 12:12:09.984416 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" 
podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.501714 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.606662 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfnpb\" (UniqueName: \"kubernetes.io/projected/862527d2-8e1c-41c3-a3fb-25b48262f2d0-kube-api-access-rfnpb\") pod \"862527d2-8e1c-41c3-a3fb-25b48262f2d0\" (UID: \"862527d2-8e1c-41c3-a3fb-25b48262f2d0\") " Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.642806 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862527d2-8e1c-41c3-a3fb-25b48262f2d0-kube-api-access-rfnpb" (OuterVolumeSpecName: "kube-api-access-rfnpb") pod "862527d2-8e1c-41c3-a3fb-25b48262f2d0" (UID: "862527d2-8e1c-41c3-a3fb-25b48262f2d0"). InnerVolumeSpecName "kube-api-access-rfnpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.709979 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfnpb\" (UniqueName: \"kubernetes.io/projected/862527d2-8e1c-41c3-a3fb-25b48262f2d0-kube-api-access-rfnpb\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.887361 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" event={"ID":"862527d2-8e1c-41c3-a3fb-25b48262f2d0","Type":"ContainerDied","Data":"7dd347579a767a221e04669e54e0851ef09e6902c874904fe3866546b31baec0"} Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.887448 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535132-9tqc8" Feb 26 12:12:11 crc kubenswrapper[4724]: I0226 12:12:11.888317 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd347579a767a221e04669e54e0851ef09e6902c874904fe3866546b31baec0" Feb 26 12:12:12 crc kubenswrapper[4724]: I0226 12:12:12.722548 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-qvn9j"] Feb 26 12:12:12 crc kubenswrapper[4724]: I0226 12:12:12.730979 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535126-qvn9j"] Feb 26 12:12:13 crc kubenswrapper[4724]: I0226 12:12:13.993426 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5633c143-692a-4e6a-993c-ed35be3b9c1a" path="/var/lib/kubelet/pods/5633c143-692a-4e6a-993c-ed35be3b9c1a/volumes" Feb 26 12:12:15 crc kubenswrapper[4724]: I0226 12:12:15.036188 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8z4p4" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" probeResult="failure" output=< Feb 26 12:12:15 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:12:15 crc kubenswrapper[4724]: > Feb 26 12:12:17 crc kubenswrapper[4724]: I0226 12:12:17.685862 4724 scope.go:117] "RemoveContainer" containerID="b8217b27a0985f502cebfd4f527819aacb555806f90f3af090f00dabc15d7bd0" Feb 26 12:12:23 crc kubenswrapper[4724]: I0226 12:12:23.983828 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:12:23 crc kubenswrapper[4724]: E0226 12:12:23.986210 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.044738 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8z4p4" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" probeResult="failure" output=< Feb 26 12:12:25 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:12:25 crc kubenswrapper[4724]: > Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.395706 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wv4jx"] Feb 26 12:12:25 crc kubenswrapper[4724]: E0226 12:12:25.402235 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862527d2-8e1c-41c3-a3fb-25b48262f2d0" containerName="oc" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.402331 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="862527d2-8e1c-41c3-a3fb-25b48262f2d0" containerName="oc" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.408781 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="862527d2-8e1c-41c3-a3fb-25b48262f2d0" containerName="oc" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.437802 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.555672 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grkc\" (UniqueName: \"kubernetes.io/projected/63cc268e-6a04-438c-a3c5-0d17f3437487-kube-api-access-8grkc\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.556303 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-catalog-content\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.556425 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-utilities\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.658141 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grkc\" (UniqueName: \"kubernetes.io/projected/63cc268e-6a04-438c-a3c5-0d17f3437487-kube-api-access-8grkc\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.658251 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-catalog-content\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.658297 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-utilities\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.669915 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-utilities\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.671091 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-catalog-content\") pod \"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.788375 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grkc\" (UniqueName: \"kubernetes.io/projected/63cc268e-6a04-438c-a3c5-0d17f3437487-kube-api-access-8grkc\") pod 
\"redhat-marketplace-wv4jx\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.809672 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv4jx"] Feb 26 12:12:25 crc kubenswrapper[4724]: I0226 12:12:25.846030 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:27 crc kubenswrapper[4724]: I0226 12:12:27.910697 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv4jx"] Feb 26 12:12:28 crc kubenswrapper[4724]: I0226 12:12:28.071027 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerStarted","Data":"f32344356c20e166f67db5671a8854516f23f81c5c8832760e3ce091c85a3ed0"} Feb 26 12:12:29 crc kubenswrapper[4724]: I0226 12:12:29.081230 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerDied","Data":"3d93dd9c148e70e19aa4455b7f08452a34195c0d2d37023af9a7aa7b6d1d2f6e"} Feb 26 12:12:29 crc kubenswrapper[4724]: I0226 12:12:29.088224 4724 generic.go:334] "Generic (PLEG): container finished" podID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerID="3d93dd9c148e70e19aa4455b7f08452a34195c0d2d37023af9a7aa7b6d1d2f6e" exitCode=0 Feb 26 12:12:32 crc kubenswrapper[4724]: I0226 12:12:32.119335 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerStarted","Data":"f3794282dc7a6415c6bb412637446198eeed622079217ee66c4673f5726e23da"} Feb 26 12:12:34 crc kubenswrapper[4724]: I0226 12:12:34.082103 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:12:34 crc kubenswrapper[4724]: I0226 12:12:34.138333 4724 generic.go:334] "Generic (PLEG): container finished" podID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerID="f3794282dc7a6415c6bb412637446198eeed622079217ee66c4673f5726e23da" exitCode=0 Feb 26 12:12:34 crc kubenswrapper[4724]: I0226 12:12:34.138389 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerDied","Data":"f3794282dc7a6415c6bb412637446198eeed622079217ee66c4673f5726e23da"} Feb 26 12:12:34 crc kubenswrapper[4724]: I0226 12:12:34.150721 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:12:35 crc kubenswrapper[4724]: I0226 12:12:35.363576 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z4p4"] Feb 26 12:12:35 crc kubenswrapper[4724]: I0226 12:12:35.383861 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8z4p4" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" containerID="cri-o://b9fec49f8865f70431c21ca99db08c46047104d3ab58d523dc3a5f5b45954934" gracePeriod=2 Feb 26 12:12:36 crc kubenswrapper[4724]: I0226 12:12:36.188392 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="913496db-dd8f-413c-8cee-5b99de20e179" containerID="b9fec49f8865f70431c21ca99db08c46047104d3ab58d523dc3a5f5b45954934" exitCode=0 Feb 26 12:12:36 crc kubenswrapper[4724]: I0226 12:12:36.188757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerDied","Data":"b9fec49f8865f70431c21ca99db08c46047104d3ab58d523dc3a5f5b45954934"} Feb 26 12:12:36 crc kubenswrapper[4724]: I0226 12:12:36.976341 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:12:36 crc kubenswrapper[4724]: E0226 12:12:36.976751 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:12:37 crc kubenswrapper[4724]: I0226 12:12:37.199702 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerStarted","Data":"3241d20e94557bf2fdc08e987e081ac469ee461a8b7555b575cfa8efdbef43fe"} Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.087232 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.222292 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z4p4" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.222606 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z4p4" event={"ID":"913496db-dd8f-413c-8cee-5b99de20e179","Type":"ContainerDied","Data":"8bc09056aca21f8d00d8bbe8172c0c270249aa547609d59ed39449d7e9f6cc09"} Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.222696 4724 scope.go:117] "RemoveContainer" containerID="b9fec49f8865f70431c21ca99db08c46047104d3ab58d523dc3a5f5b45954934" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.253686 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-utilities\") pod \"913496db-dd8f-413c-8cee-5b99de20e179\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.253731 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78h2c\" (UniqueName: \"kubernetes.io/projected/913496db-dd8f-413c-8cee-5b99de20e179-kube-api-access-78h2c\") pod \"913496db-dd8f-413c-8cee-5b99de20e179\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.253885 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-catalog-content\") pod \"913496db-dd8f-413c-8cee-5b99de20e179\" (UID: \"913496db-dd8f-413c-8cee-5b99de20e179\") " Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.257577 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-wv4jx" podStartSLOduration=6.410143255 podStartE2EDuration="13.25547559s" podCreationTimestamp="2026-02-26 12:12:25 +0000 UTC" firstStartedPulling="2026-02-26 12:12:29.082569341 +0000 UTC m=+4015.738308456" lastFinishedPulling="2026-02-26 12:12:35.927901636 +0000 UTC m=+4022.583640791" observedRunningTime="2026-02-26 12:12:38.242801726 +0000 UTC m=+4024.898540851" watchObservedRunningTime="2026-02-26 12:12:38.25547559 +0000 UTC m=+4024.911214715" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.257498 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-utilities" (OuterVolumeSpecName: "utilities") pod "913496db-dd8f-413c-8cee-5b99de20e179" (UID: "913496db-dd8f-413c-8cee-5b99de20e179"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.269435 4724 scope.go:117] "RemoveContainer" containerID="b7d2eb2263d33a7d7565b59f1231cabefd0f7dd2a685351a78860ad5a2fd34f1" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.290462 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913496db-dd8f-413c-8cee-5b99de20e179-kube-api-access-78h2c" (OuterVolumeSpecName: "kube-api-access-78h2c") pod "913496db-dd8f-413c-8cee-5b99de20e179" (UID: "913496db-dd8f-413c-8cee-5b99de20e179"). InnerVolumeSpecName "kube-api-access-78h2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.356355 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.356395 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78h2c\" (UniqueName: \"kubernetes.io/projected/913496db-dd8f-413c-8cee-5b99de20e179-kube-api-access-78h2c\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.361953 4724 scope.go:117] "RemoveContainer" containerID="c98ea57e0b555d3ae2e4851171f19655940e2abc937755ad758ce50e39ff32d8" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.458994 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "913496db-dd8f-413c-8cee-5b99de20e179" (UID: "913496db-dd8f-413c-8cee-5b99de20e179"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.561128 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/913496db-dd8f-413c-8cee-5b99de20e179-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.575194 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z4p4"] Feb 26 12:12:38 crc kubenswrapper[4724]: I0226 12:12:38.586537 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8z4p4"] Feb 26 12:12:40 crc kubenswrapper[4724]: I0226 12:12:40.025870 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913496db-dd8f-413c-8cee-5b99de20e179" path="/var/lib/kubelet/pods/913496db-dd8f-413c-8cee-5b99de20e179/volumes" Feb 26 12:12:45 crc kubenswrapper[4724]: I0226 12:12:45.847133 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:45 crc kubenswrapper[4724]: I0226 12:12:45.848139 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:47 crc kubenswrapper[4724]: I0226 12:12:47.002009 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wv4jx" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="registry-server" probeResult="failure" output=< Feb 26 12:12:47 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:12:47 crc kubenswrapper[4724]: > Feb 26 12:12:47 crc kubenswrapper[4724]: I0226 12:12:47.980981 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:12:47 crc kubenswrapper[4724]: E0226 12:12:47.986531 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:12:55 crc kubenswrapper[4724]: I0226 12:12:55.909649 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:55 crc kubenswrapper[4724]: I0226 12:12:55.973613 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:56 crc kubenswrapper[4724]: I0226 12:12:56.493432 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv4jx"] Feb 26 12:12:57 crc kubenswrapper[4724]: I0226 12:12:57.172479 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wv4jx" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="registry-server" containerID="cri-o://3241d20e94557bf2fdc08e987e081ac469ee461a8b7555b575cfa8efdbef43fe" gracePeriod=2 Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.192309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" 
event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerDied","Data":"3241d20e94557bf2fdc08e987e081ac469ee461a8b7555b575cfa8efdbef43fe"} Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.194262 4724 generic.go:334] "Generic (PLEG): container finished" podID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerID="3241d20e94557bf2fdc08e987e081ac469ee461a8b7555b575cfa8efdbef43fe" exitCode=0 Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.451745 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.601575 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-utilities\") pod \"63cc268e-6a04-438c-a3c5-0d17f3437487\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.601642 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-catalog-content\") pod \"63cc268e-6a04-438c-a3c5-0d17f3437487\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.601701 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8grkc\" (UniqueName: \"kubernetes.io/projected/63cc268e-6a04-438c-a3c5-0d17f3437487-kube-api-access-8grkc\") pod \"63cc268e-6a04-438c-a3c5-0d17f3437487\" (UID: \"63cc268e-6a04-438c-a3c5-0d17f3437487\") " Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.610552 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-utilities" (OuterVolumeSpecName: "utilities") pod "63cc268e-6a04-438c-a3c5-0d17f3437487" (UID: "63cc268e-6a04-438c-a3c5-0d17f3437487"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.641664 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63cc268e-6a04-438c-a3c5-0d17f3437487-kube-api-access-8grkc" (OuterVolumeSpecName: "kube-api-access-8grkc") pod "63cc268e-6a04-438c-a3c5-0d17f3437487" (UID: "63cc268e-6a04-438c-a3c5-0d17f3437487"). InnerVolumeSpecName "kube-api-access-8grkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.704432 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63cc268e-6a04-438c-a3c5-0d17f3437487" (UID: "63cc268e-6a04-438c-a3c5-0d17f3437487"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.706817 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.706889 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63cc268e-6a04-438c-a3c5-0d17f3437487-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:58 crc kubenswrapper[4724]: I0226 12:12:58.706911 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8grkc\" (UniqueName: \"kubernetes.io/projected/63cc268e-6a04-438c-a3c5-0d17f3437487-kube-api-access-8grkc\") on node \"crc\" DevicePath \"\"" Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.212334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv4jx" event={"ID":"63cc268e-6a04-438c-a3c5-0d17f3437487","Type":"ContainerDied","Data":"f32344356c20e166f67db5671a8854516f23f81c5c8832760e3ce091c85a3ed0"} Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.212439 4724 scope.go:117] "RemoveContainer" containerID="3241d20e94557bf2fdc08e987e081ac469ee461a8b7555b575cfa8efdbef43fe" Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.212691 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv4jx" Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.298960 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv4jx"] Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.311973 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv4jx"] Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.346366 4724 scope.go:117] "RemoveContainer" containerID="f3794282dc7a6415c6bb412637446198eeed622079217ee66c4673f5726e23da" Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.390809 4724 scope.go:117] "RemoveContainer" containerID="3d93dd9c148e70e19aa4455b7f08452a34195c0d2d37023af9a7aa7b6d1d2f6e" Feb 26 12:12:59 crc kubenswrapper[4724]: I0226 12:12:59.991741 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" path="/var/lib/kubelet/pods/63cc268e-6a04-438c-a3c5-0d17f3437487/volumes" Feb 26 12:13:00 crc kubenswrapper[4724]: I0226 12:13:00.976275 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:13:00 crc kubenswrapper[4724]: E0226 12:13:00.977871 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.146191 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chvg4"] Feb 26 12:13:10 crc kubenswrapper[4724]: E0226 12:13:10.152639 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" 
containerName="extract-utilities" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.152678 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="extract-utilities" Feb 26 12:13:10 crc kubenswrapper[4724]: E0226 12:13:10.152703 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="registry-server" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.152711 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="registry-server" Feb 26 12:13:10 crc kubenswrapper[4724]: E0226 12:13:10.152730 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.152737 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" Feb 26 12:13:10 crc kubenswrapper[4724]: E0226 12:13:10.152760 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="extract-utilities" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.152790 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="extract-utilities" Feb 26 12:13:10 crc kubenswrapper[4724]: E0226 12:13:10.152800 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="extract-content" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.152806 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="extract-content" Feb 26 12:13:10 crc kubenswrapper[4724]: E0226 12:13:10.152826 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="extract-content" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.152832 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="extract-content" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.154408 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="63cc268e-6a04-438c-a3c5-0d17f3437487" containerName="registry-server" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.155311 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="913496db-dd8f-413c-8cee-5b99de20e179" containerName="registry-server" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.165474 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.264005 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chvg4"] Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.283835 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2t7j\" (UniqueName: \"kubernetes.io/projected/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-kube-api-access-c2t7j\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.284003 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-catalog-content\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.284060 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-utilities\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.386079 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2t7j\" (UniqueName: \"kubernetes.io/projected/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-kube-api-access-c2t7j\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.386208 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-catalog-content\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.386253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-utilities\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.392084 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-utilities\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.393410 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-catalog-content\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.417646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c2t7j\" (UniqueName: \"kubernetes.io/projected/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-kube-api-access-c2t7j\") pod \"redhat-operators-chvg4\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") " pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:10 crc kubenswrapper[4724]: I0226 12:13:10.515812 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:11 crc kubenswrapper[4724]: I0226 12:13:11.263757 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chvg4"] Feb 26 12:13:11 crc kubenswrapper[4724]: W0226 12:13:11.313110 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c50f5f_291f_4743_92e3_30f9c4b9fad0.slice/crio-15bbf9bbe2107e7262330590c2d649a615e75819a495ef374599c982e4ed6f7f WatchSource:0}: Error finding container 15bbf9bbe2107e7262330590c2d649a615e75819a495ef374599c982e4ed6f7f: Status 404 returned error can't find the container with id 15bbf9bbe2107e7262330590c2d649a615e75819a495ef374599c982e4ed6f7f Feb 26 12:13:12 crc kubenswrapper[4724]: I0226 12:13:12.343813 4724 generic.go:334] "Generic (PLEG): container finished" podID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerID="429ea2aec60df9ee184756ff5276524742a6b9956a99de63fe91f7ce4a90b051" exitCode=0 Feb 26 12:13:12 crc kubenswrapper[4724]: I0226 12:13:12.343900 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerDied","Data":"429ea2aec60df9ee184756ff5276524742a6b9956a99de63fe91f7ce4a90b051"} Feb 26 12:13:12 crc kubenswrapper[4724]: I0226 12:13:12.344244 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerStarted","Data":"15bbf9bbe2107e7262330590c2d649a615e75819a495ef374599c982e4ed6f7f"} Feb 26 12:13:13 crc kubenswrapper[4724]: I0226 12:13:13.983349 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:13:13 crc kubenswrapper[4724]: E0226 12:13:13.984681 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:13:14 crc kubenswrapper[4724]: I0226 12:13:14.366171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerStarted","Data":"33282c9ca72d3c695c774073bdd67d1416908bec9dd82f531f7a329d688ba431"} Feb 26 12:13:21 crc kubenswrapper[4724]: I0226 12:13:21.594658 4724 generic.go:334] "Generic (PLEG): container finished" podID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerID="33282c9ca72d3c695c774073bdd67d1416908bec9dd82f531f7a329d688ba431" exitCode=0 Feb 26 12:13:21 crc kubenswrapper[4724]: I0226 12:13:21.594805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" 
event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerDied","Data":"33282c9ca72d3c695c774073bdd67d1416908bec9dd82f531f7a329d688ba431"} Feb 26 12:13:22 crc kubenswrapper[4724]: I0226 12:13:22.607669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerStarted","Data":"920952294d4bfa032db76ecf2839f1cb490bae508efce28ed5dc6a92409da4af"} Feb 26 12:13:22 crc kubenswrapper[4724]: I0226 12:13:22.635193 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chvg4" podStartSLOduration=3.971376503 podStartE2EDuration="13.63058666s" podCreationTimestamp="2026-02-26 12:13:09 +0000 UTC" firstStartedPulling="2026-02-26 12:13:12.346536992 +0000 UTC m=+4059.002276107" lastFinishedPulling="2026-02-26 12:13:22.005747149 +0000 UTC m=+4068.661486264" observedRunningTime="2026-02-26 12:13:22.626962218 +0000 UTC m=+4069.282701363" watchObservedRunningTime="2026-02-26 12:13:22.63058666 +0000 UTC m=+4069.286325795" Feb 26 12:13:27 crc kubenswrapper[4724]: I0226 12:13:27.976472 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:13:27 crc kubenswrapper[4724]: E0226 12:13:27.977261 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:13:30 crc kubenswrapper[4724]: I0226 12:13:30.517500 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:30 crc kubenswrapper[4724]: I0226 12:13:30.519143 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:13:31 crc kubenswrapper[4724]: I0226 12:13:31.568678 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:13:31 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:13:31 crc kubenswrapper[4724]: > Feb 26 12:13:40 crc kubenswrapper[4724]: I0226 12:13:40.976859 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:13:40 crc kubenswrapper[4724]: E0226 12:13:40.977683 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:13:41 crc kubenswrapper[4724]: I0226 12:13:41.890705 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:13:41 crc kubenswrapper[4724]: 
timeout: failed to connect service ":50051" within 1s Feb 26 12:13:41 crc kubenswrapper[4724]: > Feb 26 12:13:51 crc kubenswrapper[4724]: I0226 12:13:51.572439 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:13:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:13:51 crc kubenswrapper[4724]: > Feb 26 12:13:55 crc kubenswrapper[4724]: I0226 12:13:55.976825 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:13:55 crc kubenswrapper[4724]: E0226 12:13:55.977682 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.713526 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535134-kztbl"] Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.737547 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.751348 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.751367 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.751393 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.837168 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gmz\" (UniqueName: \"kubernetes.io/projected/71a546ef-6f29-493e-b21f-629d765a0bf6-kube-api-access-r5gmz\") pod \"auto-csr-approver-29535134-kztbl\" (UID: \"71a546ef-6f29-493e-b21f-629d765a0bf6\") " pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.878706 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-kztbl"] Feb 26 12:14:00 crc kubenswrapper[4724]: I0226 12:14:00.945562 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5gmz\" (UniqueName: \"kubernetes.io/projected/71a546ef-6f29-493e-b21f-629d765a0bf6-kube-api-access-r5gmz\") pod \"auto-csr-approver-29535134-kztbl\" (UID: \"71a546ef-6f29-493e-b21f-629d765a0bf6\") " pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:01 crc kubenswrapper[4724]: I0226 12:14:01.003101 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5gmz\" (UniqueName: \"kubernetes.io/projected/71a546ef-6f29-493e-b21f-629d765a0bf6-kube-api-access-r5gmz\") pod \"auto-csr-approver-29535134-kztbl\" (UID: \"71a546ef-6f29-493e-b21f-629d765a0bf6\") " pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:01 crc 
kubenswrapper[4724]: I0226 12:14:01.103281 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:01 crc kubenswrapper[4724]: I0226 12:14:01.579706 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:14:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:14:01 crc kubenswrapper[4724]: > Feb 26 12:14:02 crc kubenswrapper[4724]: I0226 12:14:02.608113 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-kztbl"] Feb 26 12:14:03 crc kubenswrapper[4724]: I0226 12:14:03.034832 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-kztbl" event={"ID":"71a546ef-6f29-493e-b21f-629d765a0bf6","Type":"ContainerStarted","Data":"1f3de1667030cdd28ad571ab654c5178d4ec69073b7406e62fda64efd335337a"} Feb 26 12:14:07 crc kubenswrapper[4724]: I0226 12:14:07.078191 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-kztbl" event={"ID":"71a546ef-6f29-493e-b21f-629d765a0bf6","Type":"ContainerStarted","Data":"98f670b0db67fb1a7858d7e24c2c0dcfb1491baf7e33cf4444cdfda8dcacacf6"} Feb 26 12:14:07 crc kubenswrapper[4724]: I0226 12:14:07.105742 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535134-kztbl" podStartSLOduration=5.361709863 podStartE2EDuration="7.103703248s" podCreationTimestamp="2026-02-26 12:14:00 +0000 UTC" firstStartedPulling="2026-02-26 12:14:02.644882801 +0000 UTC m=+4109.300621906" lastFinishedPulling="2026-02-26 12:14:04.386876176 +0000 UTC m=+4111.042615291" observedRunningTime="2026-02-26 12:14:07.093820106 +0000 UTC m=+4113.749559231" watchObservedRunningTime="2026-02-26 12:14:07.103703248 +0000 UTC m=+4113.759442363" Feb 26 12:14:09 crc kubenswrapper[4724]: I0226 12:14:09.098711 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-kztbl" event={"ID":"71a546ef-6f29-493e-b21f-629d765a0bf6","Type":"ContainerDied","Data":"98f670b0db67fb1a7858d7e24c2c0dcfb1491baf7e33cf4444cdfda8dcacacf6"} Feb 26 12:14:09 crc kubenswrapper[4724]: I0226 12:14:09.098903 4724 generic.go:334] "Generic (PLEG): container finished" podID="71a546ef-6f29-493e-b21f-629d765a0bf6" containerID="98f670b0db67fb1a7858d7e24c2c0dcfb1491baf7e33cf4444cdfda8dcacacf6" exitCode=0 Feb 26 12:14:09 crc kubenswrapper[4724]: I0226 12:14:09.975798 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:14:09 crc kubenswrapper[4724]: E0226 12:14:09.976434 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:14:10 crc kubenswrapper[4724]: I0226 12:14:10.816301 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:10 crc kubenswrapper[4724]: I0226 12:14:10.882011 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5gmz\" (UniqueName: \"kubernetes.io/projected/71a546ef-6f29-493e-b21f-629d765a0bf6-kube-api-access-r5gmz\") pod \"71a546ef-6f29-493e-b21f-629d765a0bf6\" (UID: \"71a546ef-6f29-493e-b21f-629d765a0bf6\") " Feb 26 12:14:10 crc kubenswrapper[4724]: I0226 12:14:10.895454 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a546ef-6f29-493e-b21f-629d765a0bf6-kube-api-access-r5gmz" (OuterVolumeSpecName: "kube-api-access-r5gmz") pod "71a546ef-6f29-493e-b21f-629d765a0bf6" (UID: "71a546ef-6f29-493e-b21f-629d765a0bf6"). InnerVolumeSpecName "kube-api-access-r5gmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:14:10 crc kubenswrapper[4724]: I0226 12:14:10.992415 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5gmz\" (UniqueName: \"kubernetes.io/projected/71a546ef-6f29-493e-b21f-629d765a0bf6-kube-api-access-r5gmz\") on node \"crc\" DevicePath \"\"" Feb 26 12:14:11 crc kubenswrapper[4724]: I0226 12:14:11.122523 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535134-kztbl" event={"ID":"71a546ef-6f29-493e-b21f-629d765a0bf6","Type":"ContainerDied","Data":"1f3de1667030cdd28ad571ab654c5178d4ec69073b7406e62fda64efd335337a"} Feb 26 12:14:11 crc kubenswrapper[4724]: I0226 12:14:11.122656 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535134-kztbl" Feb 26 12:14:11 crc kubenswrapper[4724]: I0226 12:14:11.123295 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3de1667030cdd28ad571ab654c5178d4ec69073b7406e62fda64efd335337a" Feb 26 12:14:11 crc kubenswrapper[4724]: I0226 12:14:11.272885 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-k4mvw"] Feb 26 12:14:11 crc kubenswrapper[4724]: I0226 12:14:11.282032 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535128-k4mvw"] Feb 26 12:14:11 crc kubenswrapper[4724]: I0226 12:14:11.624429 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:14:11 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:14:11 crc kubenswrapper[4724]: > Feb 26 12:14:12 crc kubenswrapper[4724]: I0226 12:14:12.023239 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c018a601-fd63-4e85-a94e-582acf4fa03b" path="/var/lib/kubelet/pods/c018a601-fd63-4e85-a94e-582acf4fa03b/volumes" Feb 26 12:14:18 crc kubenswrapper[4724]: I0226 12:14:18.917255 4724 scope.go:117] "RemoveContainer" containerID="c4cb52490a068714475dcd5a7244517ead2079b3b4adb52c6fc184bc0dc064c8" Feb 26 12:14:21 crc kubenswrapper[4724]: I0226 12:14:21.610286 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:14:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:14:21 crc kubenswrapper[4724]: > Feb 26 
12:14:23 crc kubenswrapper[4724]: I0226 12:14:23.982105 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:14:23 crc kubenswrapper[4724]: E0226 12:14:23.982749 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:14:31 crc kubenswrapper[4724]: I0226 12:14:31.567787 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:14:31 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:14:31 crc kubenswrapper[4724]: > Feb 26 12:14:38 crc kubenswrapper[4724]: I0226 12:14:38.976742 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:14:38 crc kubenswrapper[4724]: E0226 12:14:38.977597 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:14:41 crc kubenswrapper[4724]: I0226 12:14:41.562410 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:14:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:14:41 crc kubenswrapper[4724]: > Feb 26 12:14:49 crc kubenswrapper[4724]: I0226 12:14:49.976598 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a" Feb 26 12:14:49 crc kubenswrapper[4724]: E0226 12:14:49.977454 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:14:51 crc kubenswrapper[4724]: I0226 12:14:51.570882 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" probeResult="failure" output=< Feb 26 12:14:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:14:51 crc kubenswrapper[4724]: > Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.642946 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.709088 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chvg4" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.754750 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"] Feb 26 12:15:00 crc kubenswrapper[4724]: E0226 12:15:00.769314 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a546ef-6f29-493e-b21f-629d765a0bf6" containerName="oc" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.769363 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a546ef-6f29-493e-b21f-629d765a0bf6" containerName="oc" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.772416 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a546ef-6f29-493e-b21f-629d765a0bf6" containerName="oc" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.779562 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.813261 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.813370 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.908992 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"] Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.967449 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7019362b-8ced-4f02-9bcc-c92fcc157acd-secret-volume\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.967594 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc62l\" (UniqueName: \"kubernetes.io/projected/7019362b-8ced-4f02-9bcc-c92fcc157acd-kube-api-access-hc62l\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:00 crc kubenswrapper[4724]: I0226 12:15:00.967665 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7019362b-8ced-4f02-9bcc-c92fcc157acd-config-volume\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.069749 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc62l\" (UniqueName: \"kubernetes.io/projected/7019362b-8ced-4f02-9bcc-c92fcc157acd-kube-api-access-hc62l\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.069883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7019362b-8ced-4f02-9bcc-c92fcc157acd-config-volume\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.070060 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7019362b-8ced-4f02-9bcc-c92fcc157acd-secret-volume\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.074541 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chvg4"] Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.079898 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7019362b-8ced-4f02-9bcc-c92fcc157acd-config-volume\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.108931 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7019362b-8ced-4f02-9bcc-c92fcc157acd-secret-volume\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.110077 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc62l\" (UniqueName: \"kubernetes.io/projected/7019362b-8ced-4f02-9bcc-c92fcc157acd-kube-api-access-hc62l\") pod \"collect-profiles-29535135-gs2kh\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.163713 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" Feb 26 12:15:01 crc kubenswrapper[4724]: I0226 12:15:01.684140 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chvg4" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server" containerID="cri-o://920952294d4bfa032db76ecf2839f1cb490bae508efce28ed5dc6a92409da4af" gracePeriod=2 Feb 26 12:15:02 crc kubenswrapper[4724]: I0226 12:15:02.702957 4724 generic.go:334] "Generic (PLEG): container finished" podID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerID="920952294d4bfa032db76ecf2839f1cb490bae508efce28ed5dc6a92409da4af" exitCode=0 Feb 26 12:15:02 crc kubenswrapper[4724]: I0226 12:15:02.703088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerDied","Data":"920952294d4bfa032db76ecf2839f1cb490bae508efce28ed5dc6a92409da4af"} Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.693127 4724 util.go:48] "No ready sandbox for pod can be found. 
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.701480 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"]
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.716301 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvg4" event={"ID":"a1c50f5f-291f-4743-92e3-30f9c4b9fad0","Type":"ContainerDied","Data":"15bbf9bbe2107e7262330590c2d649a615e75819a495ef374599c982e4ed6f7f"}
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.716384 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chvg4"
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.716403 4724 scope.go:117] "RemoveContainer" containerID="920952294d4bfa032db76ecf2839f1cb490bae508efce28ed5dc6a92409da4af"
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.831468 4724 scope.go:117] "RemoveContainer" containerID="33282c9ca72d3c695c774073bdd67d1416908bec9dd82f531f7a329d688ba431"
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.841987 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-catalog-content\") pod \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") "
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.842389 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2t7j\" (UniqueName: \"kubernetes.io/projected/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-kube-api-access-c2t7j\") pod \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") "
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.842433 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-utilities\") pod \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\" (UID: \"a1c50f5f-291f-4743-92e3-30f9c4b9fad0\") "
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.857365 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-utilities" (OuterVolumeSpecName: "utilities") pod "a1c50f5f-291f-4743-92e3-30f9c4b9fad0" (UID: "a1c50f5f-291f-4743-92e3-30f9c4b9fad0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.879501 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-kube-api-access-c2t7j" (OuterVolumeSpecName: "kube-api-access-c2t7j") pod "a1c50f5f-291f-4743-92e3-30f9c4b9fad0" (UID: "a1c50f5f-291f-4743-92e3-30f9c4b9fad0"). InnerVolumeSpecName "kube-api-access-c2t7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.896574 4724 scope.go:117] "RemoveContainer" containerID="429ea2aec60df9ee184756ff5276524742a6b9956a99de63fe91f7ce4a90b051"
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.946575 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.946622 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2t7j\" (UniqueName: \"kubernetes.io/projected/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-kube-api-access-c2t7j\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:03 crc kubenswrapper[4724]: I0226 12:15:03.990654 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:15:03 crc kubenswrapper[4724]: E0226 12:15:03.991073 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.102784 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1c50f5f-291f-4743-92e3-30f9c4b9fad0" (UID: "a1c50f5f-291f-4743-92e3-30f9c4b9fad0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.151534 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c50f5f-291f-4743-92e3-30f9c4b9fad0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.370169 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chvg4"]
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.382200 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chvg4"]
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.728650 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" event={"ID":"7019362b-8ced-4f02-9bcc-c92fcc157acd","Type":"ContainerStarted","Data":"17ce60416393169d241b2cd08f67771a0ef14f3697375a2cbdc8e422e4d28deb"}
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.728710 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" event={"ID":"7019362b-8ced-4f02-9bcc-c92fcc157acd","Type":"ContainerStarted","Data":"1a2aeef36e6be23e813296d6ee300b354a68719c1a3921dad88acb1f836c0a11"}
Feb 26 12:15:04 crc kubenswrapper[4724]: I0226 12:15:04.758391 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" podStartSLOduration=4.754133425 podStartE2EDuration="4.754133425s" podCreationTimestamp="2026-02-26 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:15:04.747508455 +0000 UTC m=+4171.403247570" watchObservedRunningTime="2026-02-26 12:15:04.754133425 +0000 UTC m=+4171.409872540"
Feb 26 12:15:05 crc kubenswrapper[4724]: I0226 12:15:05.745401 4724 generic.go:334] "Generic (PLEG): container finished" podID="7019362b-8ced-4f02-9bcc-c92fcc157acd" containerID="17ce60416393169d241b2cd08f67771a0ef14f3697375a2cbdc8e422e4d28deb" exitCode=0
Feb 26 12:15:05 crc kubenswrapper[4724]: I0226 12:15:05.745513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" event={"ID":"7019362b-8ced-4f02-9bcc-c92fcc157acd","Type":"ContainerDied","Data":"17ce60416393169d241b2cd08f67771a0ef14f3697375a2cbdc8e422e4d28deb"}
Feb 26 12:15:06 crc kubenswrapper[4724]: I0226 12:15:06.003394 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" path="/var/lib/kubelet/pods/a1c50f5f-291f-4743-92e3-30f9c4b9fad0/volumes"
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.354363 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.530325 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7019362b-8ced-4f02-9bcc-c92fcc157acd-secret-volume\") pod \"7019362b-8ced-4f02-9bcc-c92fcc157acd\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") "
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.530500 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc62l\" (UniqueName: \"kubernetes.io/projected/7019362b-8ced-4f02-9bcc-c92fcc157acd-kube-api-access-hc62l\") pod \"7019362b-8ced-4f02-9bcc-c92fcc157acd\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") "
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.530672 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7019362b-8ced-4f02-9bcc-c92fcc157acd-config-volume\") pod \"7019362b-8ced-4f02-9bcc-c92fcc157acd\" (UID: \"7019362b-8ced-4f02-9bcc-c92fcc157acd\") "
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.532289 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7019362b-8ced-4f02-9bcc-c92fcc157acd-config-volume" (OuterVolumeSpecName: "config-volume") pod "7019362b-8ced-4f02-9bcc-c92fcc157acd" (UID: "7019362b-8ced-4f02-9bcc-c92fcc157acd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.633639 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7019362b-8ced-4f02-9bcc-c92fcc157acd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.795811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh" event={"ID":"7019362b-8ced-4f02-9bcc-c92fcc157acd","Type":"ContainerDied","Data":"1a2aeef36e6be23e813296d6ee300b354a68719c1a3921dad88acb1f836c0a11"}
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.796360 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2aeef36e6be23e813296d6ee300b354a68719c1a3921dad88acb1f836c0a11"
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.795894 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.950056 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"]
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.962765 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535090-229qd"]
Feb 26 12:15:07 crc kubenswrapper[4724]: I0226 12:15:07.992808 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41aac00-2bbf-4232-bd75-5bf0f9f69f70" path="/var/lib/kubelet/pods/a41aac00-2bbf-4232-bd75-5bf0f9f69f70/volumes"
Feb 26 12:15:08 crc kubenswrapper[4724]: I0226 12:15:08.120208 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7019362b-8ced-4f02-9bcc-c92fcc157acd-kube-api-access-hc62l" (OuterVolumeSpecName: "kube-api-access-hc62l") pod "7019362b-8ced-4f02-9bcc-c92fcc157acd" (UID: "7019362b-8ced-4f02-9bcc-c92fcc157acd"). InnerVolumeSpecName "kube-api-access-hc62l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:15:08 crc kubenswrapper[4724]: I0226 12:15:08.124276 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7019362b-8ced-4f02-9bcc-c92fcc157acd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7019362b-8ced-4f02-9bcc-c92fcc157acd" (UID: "7019362b-8ced-4f02-9bcc-c92fcc157acd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 12:15:08 crc kubenswrapper[4724]: I0226 12:15:08.144908 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7019362b-8ced-4f02-9bcc-c92fcc157acd-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:08 crc kubenswrapper[4724]: I0226 12:15:08.144952 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc62l\" (UniqueName: \"kubernetes.io/projected/7019362b-8ced-4f02-9bcc-c92fcc157acd-kube-api-access-hc62l\") on node \"crc\" DevicePath \"\""
Feb 26 12:15:16 crc kubenswrapper[4724]: I0226 12:15:16.979329 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:15:16 crc kubenswrapper[4724]: E0226 12:15:16.980081 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:15:19 crc kubenswrapper[4724]: I0226 12:15:19.414889 4724 scope.go:117] "RemoveContainer" containerID="30ebceb35be98207d7a43c771910f507ae7bd49438dbca66d2bedcdf5387c759"
Feb 26 12:15:27 crc kubenswrapper[4724]: I0226 12:15:27.976084 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:15:27 crc kubenswrapper[4724]: E0226 12:15:27.976767 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:15:39 crc kubenswrapper[4724]: I0226 12:15:39.976595 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:15:39 crc kubenswrapper[4724]: E0226 12:15:39.978046 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:15:50 crc kubenswrapper[4724]: I0226 12:15:50.975823 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:15:50 crc kubenswrapper[4724]: E0226 12:15:50.976669 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.286168 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535136-9z4wg"]
Feb 26 12:16:00 crc kubenswrapper[4724]: E0226 12:16:00.287806 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.287827 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server"
Feb 26 12:16:00 crc kubenswrapper[4724]: E0226 12:16:00.287860 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="extract-content"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.287866 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="extract-content"
Feb 26 12:16:00 crc kubenswrapper[4724]: E0226 12:16:00.287874 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="extract-utilities"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.287880 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="extract-utilities"
Feb 26 12:16:00 crc kubenswrapper[4724]: E0226 12:16:00.287898 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7019362b-8ced-4f02-9bcc-c92fcc157acd" containerName="collect-profiles"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.287903 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7019362b-8ced-4f02-9bcc-c92fcc157acd" containerName="collect-profiles"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.288817 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7019362b-8ced-4f02-9bcc-c92fcc157acd" containerName="collect-profiles"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.288839 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c50f5f-291f-4743-92e3-30f9c4b9fad0" containerName="registry-server"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.292720 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.307541 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.307542 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.309558 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.333399 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcd2\" (UniqueName: \"kubernetes.io/projected/829ab740-aea5-4c9d-a741-0c37cee92652-kube-api-access-zxcd2\") pod \"auto-csr-approver-29535136-9z4wg\" (UID: \"829ab740-aea5-4c9d-a741-0c37cee92652\") " pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.361496 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-9z4wg"]
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.435325 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcd2\" (UniqueName: \"kubernetes.io/projected/829ab740-aea5-4c9d-a741-0c37cee92652-kube-api-access-zxcd2\") pod \"auto-csr-approver-29535136-9z4wg\" (UID: \"829ab740-aea5-4c9d-a741-0c37cee92652\") " pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.520946 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcd2\" (UniqueName: \"kubernetes.io/projected/829ab740-aea5-4c9d-a741-0c37cee92652-kube-api-access-zxcd2\") pod \"auto-csr-approver-29535136-9z4wg\" (UID: \"829ab740-aea5-4c9d-a741-0c37cee92652\") " pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:00 crc kubenswrapper[4724]: I0226 12:16:00.631787 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:02 crc kubenswrapper[4724]: I0226 12:16:02.215144 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-9z4wg"]
Feb 26 12:16:02 crc kubenswrapper[4724]: I0226 12:16:02.262016 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 12:16:02 crc kubenswrapper[4724]: I0226 12:16:02.496474 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-9z4wg" event={"ID":"829ab740-aea5-4c9d-a741-0c37cee92652","Type":"ContainerStarted","Data":"d0b948d01615684fdef6f9e33e1c33631ef85cf55564c5b150b289c02110d410"}
Feb 26 12:16:05 crc kubenswrapper[4724]: I0226 12:16:05.524757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-9z4wg" event={"ID":"829ab740-aea5-4c9d-a741-0c37cee92652","Type":"ContainerStarted","Data":"2a98365f516de59309981b6609cdc4e482d207d60289640a4bb0247b55b015c8"}
Feb 26 12:16:05 crc kubenswrapper[4724]: I0226 12:16:05.541694 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535136-9z4wg" podStartSLOduration=4.144019212 podStartE2EDuration="5.541666415s" podCreationTimestamp="2026-02-26 12:16:00 +0000 UTC" firstStartedPulling="2026-02-26 12:16:02.254622238 +0000 UTC m=+4228.910361353" lastFinishedPulling="2026-02-26 12:16:03.652269441 +0000 UTC m=+4230.308008556" observedRunningTime="2026-02-26 12:16:05.538052742 +0000 UTC m=+4232.193791877" watchObservedRunningTime="2026-02-26 12:16:05.541666415 +0000 UTC m=+4232.197405530"
Feb 26 12:16:05 crc kubenswrapper[4724]: I0226 12:16:05.976842 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:16:05 crc kubenswrapper[4724]: E0226 12:16:05.977206 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:16:07 crc kubenswrapper[4724]: I0226 12:16:07.544803 4724 generic.go:334] "Generic (PLEG): container finished" podID="829ab740-aea5-4c9d-a741-0c37cee92652" containerID="2a98365f516de59309981b6609cdc4e482d207d60289640a4bb0247b55b015c8" exitCode=0
Feb 26 12:16:07 crc kubenswrapper[4724]: I0226 12:16:07.545239 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-9z4wg" event={"ID":"829ab740-aea5-4c9d-a741-0c37cee92652","Type":"ContainerDied","Data":"2a98365f516de59309981b6609cdc4e482d207d60289640a4bb0247b55b015c8"}
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.098468 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.232220 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxcd2\" (UniqueName: \"kubernetes.io/projected/829ab740-aea5-4c9d-a741-0c37cee92652-kube-api-access-zxcd2\") pod \"829ab740-aea5-4c9d-a741-0c37cee92652\" (UID: \"829ab740-aea5-4c9d-a741-0c37cee92652\") "
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.247111 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829ab740-aea5-4c9d-a741-0c37cee92652-kube-api-access-zxcd2" (OuterVolumeSpecName: "kube-api-access-zxcd2") pod "829ab740-aea5-4c9d-a741-0c37cee92652" (UID: "829ab740-aea5-4c9d-a741-0c37cee92652"). InnerVolumeSpecName "kube-api-access-zxcd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.334882 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxcd2\" (UniqueName: \"kubernetes.io/projected/829ab740-aea5-4c9d-a741-0c37cee92652-kube-api-access-zxcd2\") on node \"crc\" DevicePath \"\""
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.565752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535136-9z4wg" event={"ID":"829ab740-aea5-4c9d-a741-0c37cee92652","Type":"ContainerDied","Data":"d0b948d01615684fdef6f9e33e1c33631ef85cf55564c5b150b289c02110d410"}
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.566020 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b948d01615684fdef6f9e33e1c33631ef85cf55564c5b150b289c02110d410"
Feb 26 12:16:09 crc kubenswrapper[4724]: I0226 12:16:09.565799 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535136-9z4wg"
Feb 26 12:16:10 crc kubenswrapper[4724]: I0226 12:16:10.208212 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-hgz2x"]
Feb 26 12:16:10 crc kubenswrapper[4724]: I0226 12:16:10.226487 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535130-hgz2x"]
Feb 26 12:16:11 crc kubenswrapper[4724]: I0226 12:16:11.988361 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a414da-dd5a-4384-818d-50f8d04e5c65" path="/var/lib/kubelet/pods/a6a414da-dd5a-4384-818d-50f8d04e5c65/volumes"
Feb 26 12:16:17 crc kubenswrapper[4724]: I0226 12:16:17.976727 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:16:18 crc kubenswrapper[4724]: I0226 12:16:18.664451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"c4bf722a10be17c5ca8701930f6141f02fd4435ac6cdb580548158873afa50dd"}
Feb 26 12:16:19 crc kubenswrapper[4724]: I0226 12:16:19.663126 4724 scope.go:117] "RemoveContainer" containerID="274aeced4678bce8e03aec25bea200a90199fbe6a0adea156efd24335682d69f"
Feb 26 12:16:44 crc kubenswrapper[4724]: E0226 12:16:44.534114 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:45718->38.102.83.145:45037: write tcp 38.102.83.145:45718->38.102.83.145:45037: write: connection reset by peer
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.495587 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535138-lwvnb"]
Feb 26 12:18:00 crc kubenswrapper[4724]: E0226 12:18:00.500293 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829ab740-aea5-4c9d-a741-0c37cee92652" containerName="oc"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.500349 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="829ab740-aea5-4c9d-a741-0c37cee92652" containerName="oc"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.503950 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="829ab740-aea5-4c9d-a741-0c37cee92652" containerName="oc"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.512361 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.523492 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.523598 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.523501 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.582505 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp7qb\" (UniqueName: \"kubernetes.io/projected/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c-kube-api-access-hp7qb\") pod \"auto-csr-approver-29535138-lwvnb\" (UID: \"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c\") " pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.646044 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-lwvnb"]
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.685247 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp7qb\" (UniqueName: \"kubernetes.io/projected/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c-kube-api-access-hp7qb\") pod \"auto-csr-approver-29535138-lwvnb\" (UID: \"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c\") " pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.715112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp7qb\" (UniqueName: \"kubernetes.io/projected/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c-kube-api-access-hp7qb\") pod \"auto-csr-approver-29535138-lwvnb\" (UID: \"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c\") " pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:00 crc kubenswrapper[4724]: I0226 12:18:00.852363 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:02 crc kubenswrapper[4724]: I0226 12:18:02.837298 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-lwvnb"]
Feb 26 12:18:03 crc kubenswrapper[4724]: I0226 12:18:03.704212 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-lwvnb" event={"ID":"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c","Type":"ContainerStarted","Data":"7991f07fc8e401d2315a3726807fb762dd1cfcd7a5f85199ac80a1011e1230dc"}
Feb 26 12:18:05 crc kubenswrapper[4724]: I0226 12:18:05.723524 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-lwvnb" event={"ID":"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c","Type":"ContainerStarted","Data":"a6653805bf0e81d58345e6ef5cb86c56ee6417cbd5e02048c8be4f6982d9cd37"}
Feb 26 12:18:05 crc kubenswrapper[4724]: I0226 12:18:05.744738 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535138-lwvnb" podStartSLOduration=4.526974634 podStartE2EDuration="5.744695438s" podCreationTimestamp="2026-02-26 12:18:00 +0000 UTC" firstStartedPulling="2026-02-26 12:18:03.070992749 +0000 UTC m=+4349.726731864" lastFinishedPulling="2026-02-26 12:18:04.288713553 +0000 UTC m=+4350.944452668" observedRunningTime="2026-02-26 12:18:05.738860119 +0000 UTC m=+4352.394599254" watchObservedRunningTime="2026-02-26 12:18:05.744695438 +0000 UTC m=+4352.400434563"
Feb 26 12:18:07 crc kubenswrapper[4724]: I0226 12:18:07.744395 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-lwvnb" event={"ID":"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c","Type":"ContainerDied","Data":"a6653805bf0e81d58345e6ef5cb86c56ee6417cbd5e02048c8be4f6982d9cd37"}
Feb 26 12:18:07 crc kubenswrapper[4724]: I0226 12:18:07.745392 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca17bd1b-4cf2-46d2-9550-0676c7e9d40c" containerID="a6653805bf0e81d58345e6ef5cb86c56ee6417cbd5e02048c8be4f6982d9cd37" exitCode=0
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.348307 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.471296 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp7qb\" (UniqueName: \"kubernetes.io/projected/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c-kube-api-access-hp7qb\") pod \"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c\" (UID: \"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c\") "
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.498936 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c-kube-api-access-hp7qb" (OuterVolumeSpecName: "kube-api-access-hp7qb") pod "ca17bd1b-4cf2-46d2-9550-0676c7e9d40c" (UID: "ca17bd1b-4cf2-46d2-9550-0676c7e9d40c"). InnerVolumeSpecName "kube-api-access-hp7qb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.574780 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp7qb\" (UniqueName: \"kubernetes.io/projected/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c-kube-api-access-hp7qb\") on node \"crc\" DevicePath \"\""
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.792861 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535138-lwvnb" event={"ID":"ca17bd1b-4cf2-46d2-9550-0676c7e9d40c","Type":"ContainerDied","Data":"7991f07fc8e401d2315a3726807fb762dd1cfcd7a5f85199ac80a1011e1230dc"}
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.793241 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7991f07fc8e401d2315a3726807fb762dd1cfcd7a5f85199ac80a1011e1230dc"
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.792983 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535138-lwvnb"
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.866948 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-9tqc8"]
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.877134 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535132-9tqc8"]
Feb 26 12:18:09 crc kubenswrapper[4724]: I0226 12:18:09.987780 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862527d2-8e1c-41c3-a3fb-25b48262f2d0" path="/var/lib/kubelet/pods/862527d2-8e1c-41c3-a3fb-25b48262f2d0/volumes"
Feb 26 12:18:19 crc kubenswrapper[4724]: I0226 12:18:19.917486 4724 scope.go:117] "RemoveContainer" containerID="13863c96354220ba47d4c7137ea1f8de10c786006eb41f873544316cc4717907"
Feb 26 12:18:46 crc kubenswrapper[4724]: I0226 12:18:46.908243 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:18:46 crc kubenswrapper[4724]: I0226 12:18:46.909585 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:19:16 crc kubenswrapper[4724]: I0226 12:19:16.911149 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:19:16 crc kubenswrapper[4724]: I0226 12:19:16.911831 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:19:17 crc kubenswrapper[4724]: E0226 12:19:17.872440 4724 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.145:33844->38.102.83.145:45037: read tcp 38.102.83.145:33844->38.102.83.145:45037: read: connection reset by peer
Feb 26 12:19:46 crc kubenswrapper[4724]: I0226 12:19:46.906398 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:19:46 crc kubenswrapper[4724]: I0226 12:19:46.906993 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:19:46 crc kubenswrapper[4724]: I0226 12:19:46.907108 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 12:19:46 crc kubenswrapper[4724]: I0226 12:19:46.910331 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4bf722a10be17c5ca8701930f6141f02fd4435ac6cdb580548158873afa50dd"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 12:19:46 crc kubenswrapper[4724]: I0226 12:19:46.910865 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://c4bf722a10be17c5ca8701930f6141f02fd4435ac6cdb580548158873afa50dd" gracePeriod=600
Feb 26 12:19:47 crc kubenswrapper[4724]: I0226 12:19:47.751735 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="c4bf722a10be17c5ca8701930f6141f02fd4435ac6cdb580548158873afa50dd" exitCode=0
Feb 26 12:19:47 crc kubenswrapper[4724]: I0226 12:19:47.751806 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"c4bf722a10be17c5ca8701930f6141f02fd4435ac6cdb580548158873afa50dd"}
Feb 26 12:19:47 crc kubenswrapper[4724]: I0226 12:19:47.752482 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"}
Feb 26 12:19:47 crc kubenswrapper[4724]: I0226 12:19:47.752579 4724 scope.go:117] "RemoveContainer" containerID="6488b078049b23f2f632316d56a5d71f8f50ff7409c4a106dad903e6b280418a"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.188629 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535140-dw7jr"]
Feb 26 12:20:00 crc kubenswrapper[4724]: E0226 12:20:00.194777 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca17bd1b-4cf2-46d2-9550-0676c7e9d40c" containerName="oc"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.194832 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca17bd1b-4cf2-46d2-9550-0676c7e9d40c" containerName="oc"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.196258 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca17bd1b-4cf2-46d2-9550-0676c7e9d40c" containerName="oc"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.199590 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.203708 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-dw7jr"]
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.211605 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.211807 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.211627 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.370865 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrmv\" (UniqueName: \"kubernetes.io/projected/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5-kube-api-access-dnrmv\") pod \"auto-csr-approver-29535140-dw7jr\" (UID: \"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5\") " pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.472766 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrmv\" (UniqueName: \"kubernetes.io/projected/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5-kube-api-access-dnrmv\") pod \"auto-csr-approver-29535140-dw7jr\" (UID: \"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5\") " pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.848148 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrmv\" (UniqueName: \"kubernetes.io/projected/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5-kube-api-access-dnrmv\") pod \"auto-csr-approver-29535140-dw7jr\" (UID: \"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5\") " pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:00 crc kubenswrapper[4724]: I0226 12:20:00.861653 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:01 crc kubenswrapper[4724]: I0226 12:20:01.884439 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-dw7jr"]
Feb 26 12:20:02 crc kubenswrapper[4724]: I0226 12:20:02.906119 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-dw7jr" event={"ID":"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5","Type":"ContainerStarted","Data":"22615833623a255b000669f18b1f7f48b946714fedc8d838571e112812f527cb"}
Feb 26 12:20:05 crc kubenswrapper[4724]: I0226 12:20:05.938999 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-dw7jr" event={"ID":"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5","Type":"ContainerStarted","Data":"17cfd3d0e76efe6bb3b49e58f6d3b231187564902c10e6e728bfc15c4730dbaa"}
Feb 26 12:20:05 crc kubenswrapper[4724]: I0226 12:20:05.977834 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535140-dw7jr" podStartSLOduration=4.434144166 podStartE2EDuration="5.977796851s" podCreationTimestamp="2026-02-26 12:20:00 +0000 UTC" firstStartedPulling="2026-02-26 12:20:01.915787577 +0000 UTC m=+4468.571526692" lastFinishedPulling="2026-02-26 12:20:03.459440262 +0000 UTC m=+4470.115179377" observedRunningTime="2026-02-26 12:20:05.968471753 +0000 UTC m=+4472.624210868" watchObservedRunningTime="2026-02-26 12:20:05.977796851 +0000 UTC m=+4472.633535966"
Feb 26 12:20:07 crc kubenswrapper[4724]: I0226 12:20:07.968671 4724 generic.go:334] "Generic (PLEG): container finished" podID="d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5" containerID="17cfd3d0e76efe6bb3b49e58f6d3b231187564902c10e6e728bfc15c4730dbaa" exitCode=0
Feb 26 12:20:07 crc kubenswrapper[4724]: I0226 12:20:07.969159 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-dw7jr" event={"ID":"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5","Type":"ContainerDied","Data":"17cfd3d0e76efe6bb3b49e58f6d3b231187564902c10e6e728bfc15c4730dbaa"}
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.804722 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fkx66"]
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.812806 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.836356 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkx66"]
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.909114 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-utilities\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.909313 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-catalog-content\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.909360 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdr7b\" (UniqueName: \"kubernetes.io/projected/5bc877c2-2433-4421-83c0-65e82f447f04-kube-api-access-zdr7b\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.996984 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535140-dw7jr" event={"ID":"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5","Type":"ContainerDied","Data":"22615833623a255b000669f18b1f7f48b946714fedc8d838571e112812f527cb"}
Feb 26 12:20:09 crc kubenswrapper[4724]: I0226 12:20:09.997101 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22615833623a255b000669f18b1f7f48b946714fedc8d838571e112812f527cb"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.010682 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-catalog-content\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.010771 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdr7b\" (UniqueName: \"kubernetes.io/projected/5bc877c2-2433-4421-83c0-65e82f447f04-kube-api-access-zdr7b\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.010860 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-utilities\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.011922 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-catalog-content\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.012315 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-utilities\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.021864 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.031379 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdr7b\" (UniqueName: \"kubernetes.io/projected/5bc877c2-2433-4421-83c0-65e82f447f04-kube-api-access-zdr7b\") pod \"certified-operators-fkx66\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") " pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.112660 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnrmv\" (UniqueName: \"kubernetes.io/projected/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5-kube-api-access-dnrmv\") pod \"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5\" (UID: \"d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5\") "
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.137827 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.142910 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5-kube-api-access-dnrmv" (OuterVolumeSpecName: "kube-api-access-dnrmv") pod "d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5" (UID: "d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5"). InnerVolumeSpecName "kube-api-access-dnrmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:20:10 crc kubenswrapper[4724]: I0226 12:20:10.221145 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnrmv\" (UniqueName: \"kubernetes.io/projected/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5-kube-api-access-dnrmv\") on node \"crc\" DevicePath \"\""
Feb 26 12:20:11 crc kubenswrapper[4724]: I0226 12:20:11.005675 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535140-dw7jr"
Feb 26 12:20:11 crc kubenswrapper[4724]: I0226 12:20:11.218079 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkx66"]
Feb 26 12:20:11 crc kubenswrapper[4724]: I0226 12:20:11.282085 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-kztbl"]
Feb 26 12:20:11 crc kubenswrapper[4724]: I0226 12:20:11.291666 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535134-kztbl"]
Feb 26 12:20:11 crc kubenswrapper[4724]: I0226 12:20:11.991009 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a546ef-6f29-493e-b21f-629d765a0bf6" path="/var/lib/kubelet/pods/71a546ef-6f29-493e-b21f-629d765a0bf6/volumes"
Feb 26 12:20:12 crc kubenswrapper[4724]: I0226 12:20:12.018511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerStarted","Data":"f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182"}
Feb 26 12:20:12 crc kubenswrapper[4724]: I0226 12:20:12.018560 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerStarted","Data":"9ca87c28287d41e3f75869a72d073076c349c67a6616c61cd69bf385720e72d5"}
Feb 26 12:20:13 crc kubenswrapper[4724]: I0226 12:20:13.033636 4724 generic.go:334] "Generic (PLEG): container finished" podID="5bc877c2-2433-4421-83c0-65e82f447f04" containerID="f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182" exitCode=0
Feb 26 12:20:13 crc kubenswrapper[4724]: I0226 12:20:13.033752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerDied","Data":"f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182"}
Feb 26 12:20:14 crc kubenswrapper[4724]: I0226 12:20:14.047459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerStarted","Data":"1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755"}
Feb 26 12:20:18 crc kubenswrapper[4724]: I0226 12:20:18.098587 4724 generic.go:334] "Generic (PLEG): container finished" podID="5bc877c2-2433-4421-83c0-65e82f447f04" containerID="1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755" exitCode=0
Feb 26 12:20:18 crc kubenswrapper[4724]: I0226 12:20:18.098657 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerDied","Data":"1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755"}
Feb 26 12:20:19 crc kubenswrapper[4724]: I0226 12:20:19.110548 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerStarted","Data":"78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19"}
Feb 26 12:20:19 crc kubenswrapper[4724]: I0226 12:20:19.133708 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fkx66" podStartSLOduration=4.551868525 podStartE2EDuration="10.133688358s" podCreationTimestamp="2026-02-26 12:20:09 +0000 UTC" firstStartedPulling="2026-02-26 12:20:13.039284969 +0000 UTC m=+4479.695024084" lastFinishedPulling="2026-02-26 12:20:18.621104802 +0000 UTC m=+4485.276843917" observedRunningTime="2026-02-26 12:20:19.132541099 +0000 UTC m=+4485.788280214" watchObservedRunningTime="2026-02-26 12:20:19.133688358 +0000 UTC m=+4485.789427473"
Feb 26 12:20:20 crc kubenswrapper[4724]: I0226 12:20:20.138735 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:20 crc kubenswrapper[4724]: I0226 12:20:20.138812 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:20 crc kubenswrapper[4724]: I0226 12:20:20.287328 4724 scope.go:117] "RemoveContainer" containerID="98f670b0db67fb1a7858d7e24c2c0dcfb1491baf7e33cf4444cdfda8dcacacf6"
Feb 26 12:20:21 crc kubenswrapper[4724]: I0226 12:20:21.229831 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fkx66" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:20:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:20:21 crc kubenswrapper[4724]: >
Feb 26 12:20:31 crc kubenswrapper[4724]: I0226 12:20:31.191319 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fkx66" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:20:31 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:20:31 crc kubenswrapper[4724]: >
Feb 26 12:20:41 crc kubenswrapper[4724]: I0226 12:20:41.188172 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fkx66" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:20:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:20:41 crc kubenswrapper[4724]: >
Feb 26 12:20:50 crc kubenswrapper[4724]: I0226 12:20:50.220372 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:50 crc kubenswrapper[4724]: I0226 12:20:50.289746 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:50 crc kubenswrapper[4724]: I0226 12:20:50.457961 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkx66"]
Feb 26 12:20:51 crc kubenswrapper[4724]: I0226 12:20:51.416289 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fkx66" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="registry-server" containerID="cri-o://78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19" gracePeriod=2
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.397746 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.427082 4724 generic.go:334] "Generic (PLEG): container finished" podID="5bc877c2-2433-4421-83c0-65e82f447f04" containerID="78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19" exitCode=0
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.427140 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerDied","Data":"78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19"}
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.427193 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkx66" event={"ID":"5bc877c2-2433-4421-83c0-65e82f447f04","Type":"ContainerDied","Data":"9ca87c28287d41e3f75869a72d073076c349c67a6616c61cd69bf385720e72d5"}
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.427244 4724 scope.go:117] "RemoveContainer" containerID="78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.427419 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkx66"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.469742 4724 scope.go:117] "RemoveContainer" containerID="1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.501295 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-catalog-content\") pod \"5bc877c2-2433-4421-83c0-65e82f447f04\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") "
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.501357 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdr7b\" (UniqueName: \"kubernetes.io/projected/5bc877c2-2433-4421-83c0-65e82f447f04-kube-api-access-zdr7b\") pod \"5bc877c2-2433-4421-83c0-65e82f447f04\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") "
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.501858 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-utilities\") pod \"5bc877c2-2433-4421-83c0-65e82f447f04\" (UID: \"5bc877c2-2433-4421-83c0-65e82f447f04\") "
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.503308 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-utilities" (OuterVolumeSpecName: "utilities") pod "5bc877c2-2433-4421-83c0-65e82f447f04" (UID: "5bc877c2-2433-4421-83c0-65e82f447f04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.504585 4724 scope.go:117] "RemoveContainer" containerID="f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.517956 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc877c2-2433-4421-83c0-65e82f447f04-kube-api-access-zdr7b" (OuterVolumeSpecName: "kube-api-access-zdr7b") pod "5bc877c2-2433-4421-83c0-65e82f447f04" (UID: "5bc877c2-2433-4421-83c0-65e82f447f04"). InnerVolumeSpecName "kube-api-access-zdr7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.596120 4724 scope.go:117] "RemoveContainer" containerID="78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.612452 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.612512 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdr7b\" (UniqueName: \"kubernetes.io/projected/5bc877c2-2433-4421-83c0-65e82f447f04-kube-api-access-zdr7b\") on node \"crc\" DevicePath \"\""
Feb 26 12:20:52 crc kubenswrapper[4724]: E0226 12:20:52.613686 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19\": container with ID starting with 78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19 not found: ID does not exist" containerID="78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.613793 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19"} err="failed to get container status \"78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19\": rpc error: code = NotFound desc = could not find container \"78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19\": container with ID starting with 78f46003c1de6196a98852ca8a062fc59d985a871c0156d9e40b30baa107ec19 not found: ID does not exist"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.613822 4724 scope.go:117] "RemoveContainer" containerID="1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755"
Feb 26 12:20:52 crc kubenswrapper[4724]: E0226 12:20:52.614437 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755\": container with ID starting with 1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755 not found: ID does not exist" containerID="1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755"
Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.614483 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755"} err="failed to get container status \"1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755\": rpc error: code = NotFound desc = could not find container 
\"1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755\": container with ID starting with 1a4fee8fa553d967a8834cbbb05568b5e7872cd77ed1685d3adcac5e3754c755 not found: ID does not exist" Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.614513 4724 scope.go:117] "RemoveContainer" containerID="f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182" Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.616072 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bc877c2-2433-4421-83c0-65e82f447f04" (UID: "5bc877c2-2433-4421-83c0-65e82f447f04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:20:52 crc kubenswrapper[4724]: E0226 12:20:52.622314 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182\": container with ID starting with f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182 not found: ID does not exist" containerID="f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182" Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.622364 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182"} err="failed to get container status \"f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182\": rpc error: code = NotFound desc = could not find container \"f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182\": container with ID starting with f3b0ad12afb62f0fa2190ce12ee33efd6362a21f9f25acab24800a0efb163182 not found: ID does not exist" Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.713771 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc877c2-2433-4421-83c0-65e82f447f04-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.788294 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkx66"] Feb 26 12:20:52 crc kubenswrapper[4724]: I0226 12:20:52.796346 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fkx66"] Feb 26 12:20:53 crc kubenswrapper[4724]: I0226 12:20:53.992335 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" path="/var/lib/kubelet/pods/5bc877c2-2433-4421-83c0-65e82f447f04/volumes" Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.178962 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535142-stg7q"] Feb 26 12:22:00 crc kubenswrapper[4724]: E0226 12:22:00.180900 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="extract-content" Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.181108 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="extract-content" Feb 26 12:22:00 crc kubenswrapper[4724]: E0226 12:22:00.181170 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="registry-server" Feb 26 12:22:00 crc 
Feb 26 12:22:00 crc kubenswrapper[4724]: E0226 12:22:00.181278 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5" containerName="oc"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.181285 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5" containerName="oc"
Feb 26 12:22:00 crc kubenswrapper[4724]: E0226 12:22:00.181297 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="extract-utilities"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.181305 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="extract-utilities"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.182805 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5" containerName="oc"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.182846 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc877c2-2433-4421-83c0-65e82f447f04" containerName="registry-server"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.184407 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.190830 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-stg7q"]
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.193345 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.194123 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.195427 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.298276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ngbk\" (UniqueName: \"kubernetes.io/projected/aac99134-c83a-468f-b4f7-61d2eed0b581-kube-api-access-8ngbk\") pod \"auto-csr-approver-29535142-stg7q\" (UID: \"aac99134-c83a-468f-b4f7-61d2eed0b581\") " pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.400553 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ngbk\" (UniqueName: \"kubernetes.io/projected/aac99134-c83a-468f-b4f7-61d2eed0b581-kube-api-access-8ngbk\") pod \"auto-csr-approver-29535142-stg7q\" (UID: \"aac99134-c83a-468f-b4f7-61d2eed0b581\") " pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.436140 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ngbk\" (UniqueName: \"kubernetes.io/projected/aac99134-c83a-468f-b4f7-61d2eed0b581-kube-api-access-8ngbk\") pod \"auto-csr-approver-29535142-stg7q\" (UID: \"aac99134-c83a-468f-b4f7-61d2eed0b581\") " pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:00 crc kubenswrapper[4724]: I0226 12:22:00.520157 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:01 crc kubenswrapper[4724]: I0226 12:22:01.116151 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-stg7q"]
Feb 26 12:22:01 crc kubenswrapper[4724]: W0226 12:22:01.125102 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaac99134_c83a_468f_b4f7_61d2eed0b581.slice/crio-b4541f858b783172f0d7b7737899efc37b4cd7188a9fdd643727150e0086bd50 WatchSource:0}: Error finding container b4541f858b783172f0d7b7737899efc37b4cd7188a9fdd643727150e0086bd50: Status 404 returned error can't find the container with id b4541f858b783172f0d7b7737899efc37b4cd7188a9fdd643727150e0086bd50
Feb 26 12:22:01 crc kubenswrapper[4724]: I0226 12:22:01.129836 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 12:22:02 crc kubenswrapper[4724]: I0226 12:22:02.037433 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-stg7q" event={"ID":"aac99134-c83a-468f-b4f7-61d2eed0b581","Type":"ContainerStarted","Data":"b4541f858b783172f0d7b7737899efc37b4cd7188a9fdd643727150e0086bd50"}
Feb 26 12:22:04 crc kubenswrapper[4724]: I0226 12:22:04.056754 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-stg7q" event={"ID":"aac99134-c83a-468f-b4f7-61d2eed0b581","Type":"ContainerStarted","Data":"e0b377088fb6a3b72727f3b7fed144f1c771e0f9b2af2a7737957bc86e2d46d8"}
Feb 26 12:22:04 crc kubenswrapper[4724]: I0226 12:22:04.075999 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535142-stg7q" podStartSLOduration=2.235310979 podStartE2EDuration="4.075952s" podCreationTimestamp="2026-02-26 12:22:00 +0000 UTC" firstStartedPulling="2026-02-26 12:22:01.12802561 +0000 UTC m=+4587.783764725" lastFinishedPulling="2026-02-26 12:22:02.968666631 +0000 UTC m=+4589.624405746" observedRunningTime="2026-02-26 12:22:04.071696122 +0000 UTC m=+4590.727435237" watchObservedRunningTime="2026-02-26 12:22:04.075952 +0000 UTC m=+4590.731691115"
Feb 26 12:22:06 crc kubenswrapper[4724]: I0226 12:22:06.079630 4724 generic.go:334] "Generic (PLEG): container finished" podID="aac99134-c83a-468f-b4f7-61d2eed0b581" containerID="e0b377088fb6a3b72727f3b7fed144f1c771e0f9b2af2a7737957bc86e2d46d8" exitCode=0
Feb 26 12:22:06 crc kubenswrapper[4724]: I0226 12:22:06.079685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-stg7q" event={"ID":"aac99134-c83a-468f-b4f7-61d2eed0b581","Type":"ContainerDied","Data":"e0b377088fb6a3b72727f3b7fed144f1c771e0f9b2af2a7737957bc86e2d46d8"}
Feb 26 12:22:07 crc kubenswrapper[4724]: I0226 12:22:07.575169 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:07 crc kubenswrapper[4724]: I0226 12:22:07.662698 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ngbk\" (UniqueName: \"kubernetes.io/projected/aac99134-c83a-468f-b4f7-61d2eed0b581-kube-api-access-8ngbk\") pod \"aac99134-c83a-468f-b4f7-61d2eed0b581\" (UID: \"aac99134-c83a-468f-b4f7-61d2eed0b581\") "
Feb 26 12:22:07 crc kubenswrapper[4724]: I0226 12:22:07.669383 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac99134-c83a-468f-b4f7-61d2eed0b581-kube-api-access-8ngbk" (OuterVolumeSpecName: "kube-api-access-8ngbk") pod "aac99134-c83a-468f-b4f7-61d2eed0b581" (UID: "aac99134-c83a-468f-b4f7-61d2eed0b581"). InnerVolumeSpecName "kube-api-access-8ngbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:22:07 crc kubenswrapper[4724]: I0226 12:22:07.765658 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ngbk\" (UniqueName: \"kubernetes.io/projected/aac99134-c83a-468f-b4f7-61d2eed0b581-kube-api-access-8ngbk\") on node \"crc\" DevicePath \"\""
Feb 26 12:22:08 crc kubenswrapper[4724]: I0226 12:22:08.098594 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535142-stg7q" event={"ID":"aac99134-c83a-468f-b4f7-61d2eed0b581","Type":"ContainerDied","Data":"b4541f858b783172f0d7b7737899efc37b4cd7188a9fdd643727150e0086bd50"}
Feb 26 12:22:08 crc kubenswrapper[4724]: I0226 12:22:08.098838 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4541f858b783172f0d7b7737899efc37b4cd7188a9fdd643727150e0086bd50"
Feb 26 12:22:08 crc kubenswrapper[4724]: I0226 12:22:08.099016 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535142-stg7q"
Feb 26 12:22:08 crc kubenswrapper[4724]: I0226 12:22:08.160668 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-9z4wg"]
Feb 26 12:22:08 crc kubenswrapper[4724]: I0226 12:22:08.171860 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535136-9z4wg"]
Feb 26 12:22:09 crc kubenswrapper[4724]: I0226 12:22:09.989865 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829ab740-aea5-4c9d-a741-0c37cee92652" path="/var/lib/kubelet/pods/829ab740-aea5-4c9d-a741-0c37cee92652/volumes"
Feb 26 12:22:16 crc kubenswrapper[4724]: I0226 12:22:16.906232 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:22:16 crc kubenswrapper[4724]: I0226 12:22:16.907014 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:22:20 crc kubenswrapper[4724]: I0226 12:22:20.508690 4724 scope.go:117] "RemoveContainer" containerID="2a98365f516de59309981b6609cdc4e482d207d60289640a4bb0247b55b015c8"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.893720 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bjpts"]
Feb 26 12:22:27 crc kubenswrapper[4724]: E0226 12:22:27.894845 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac99134-c83a-468f-b4f7-61d2eed0b581" containerName="oc"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.894863 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac99134-c83a-468f-b4f7-61d2eed0b581" containerName="oc"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.895110 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac99134-c83a-468f-b4f7-61d2eed0b581" containerName="oc"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.896911 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.905571 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjpts"]
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.984034 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8p7\" (UniqueName: \"kubernetes.io/projected/7c3276ef-d230-4079-9380-8dfa04c34a80-kube-api-access-7t8p7\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.984133 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-utilities\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:27 crc kubenswrapper[4724]: I0226 12:22:27.984239 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-catalog-content\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.085710 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t8p7\" (UniqueName: \"kubernetes.io/projected/7c3276ef-d230-4079-9380-8dfa04c34a80-kube-api-access-7t8p7\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.085864 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-utilities\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.086000 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-catalog-content\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.086544 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-catalog-content\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.086709 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-utilities\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.104416 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t8p7\" (UniqueName: \"kubernetes.io/projected/7c3276ef-d230-4079-9380-8dfa04c34a80-kube-api-access-7t8p7\") pod \"community-operators-bjpts\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") " pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.219565 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:28 crc kubenswrapper[4724]: I0226 12:22:28.773819 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjpts"]
Feb 26 12:22:29 crc kubenswrapper[4724]: I0226 12:22:29.347941 4724 generic.go:334] "Generic (PLEG): container finished" podID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerID="605104cda9046d467889a3d15b00c321ada54899b6db2c63d63b2a4e8295c2a2" exitCode=0
Feb 26 12:22:29 crc kubenswrapper[4724]: I0226 12:22:29.348038 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerDied","Data":"605104cda9046d467889a3d15b00c321ada54899b6db2c63d63b2a4e8295c2a2"}
Feb 26 12:22:29 crc kubenswrapper[4724]: I0226 12:22:29.348310 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerStarted","Data":"53a32bb07c237f4391812918f69d19878e3122db76d3e18e58730351a30c1a3b"}
Feb 26 12:22:31 crc kubenswrapper[4724]: I0226 12:22:31.375544 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerStarted","Data":"257cc8ceabc343c03f883b3aebf1a29264d5b530d6b40d1ef7f749ab0a0f8053"}
Feb 26 12:22:36 crc kubenswrapper[4724]: I0226 12:22:36.422028 4724 generic.go:334] "Generic (PLEG): container finished" podID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerID="257cc8ceabc343c03f883b3aebf1a29264d5b530d6b40d1ef7f749ab0a0f8053" exitCode=0
Feb 26 12:22:36 crc kubenswrapper[4724]: I0226 12:22:36.422104 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerDied","Data":"257cc8ceabc343c03f883b3aebf1a29264d5b530d6b40d1ef7f749ab0a0f8053"}
Feb 26 12:22:40 crc kubenswrapper[4724]: I0226 12:22:40.469555 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerStarted","Data":"d28e61d0d2f55693d122333015330ee2f63bc52ea7cb66a458707affe081b96a"}
Feb 26 12:22:40 crc kubenswrapper[4724]: I0226 12:22:40.496391 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bjpts" podStartSLOduration=3.549344672 podStartE2EDuration="13.496367454s" podCreationTimestamp="2026-02-26 12:22:27 +0000 UTC" firstStartedPulling="2026-02-26 12:22:29.351405144 +0000 UTC m=+4616.007144259" lastFinishedPulling="2026-02-26 12:22:39.298427926 +0000 UTC m=+4625.954167041" observedRunningTime="2026-02-26 12:22:40.484023388 +0000 UTC m=+4627.139762503" watchObservedRunningTime="2026-02-26 12:22:40.496367454 +0000 UTC m=+4627.152106579"
Feb 26 12:22:46 crc kubenswrapper[4724]: I0226 12:22:46.906096 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:22:46 crc kubenswrapper[4724]: I0226 12:22:46.906484 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:22:48 crc kubenswrapper[4724]: I0226 12:22:48.220066 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:48 crc kubenswrapper[4724]: I0226 12:22:48.220119 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:48 crc kubenswrapper[4724]: I0226 12:22:48.300629 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:48 crc kubenswrapper[4724]: I0226 12:22:48.591835 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:48 crc kubenswrapper[4724]: I0226 12:22:48.665617 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjpts"]
Feb 26 12:22:50 crc kubenswrapper[4724]: I0226 12:22:50.562231 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bjpts" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="registry-server" containerID="cri-o://d28e61d0d2f55693d122333015330ee2f63bc52ea7cb66a458707affe081b96a" gracePeriod=2
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.571783 4724 generic.go:334] "Generic (PLEG): container finished" podID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerID="d28e61d0d2f55693d122333015330ee2f63bc52ea7cb66a458707affe081b96a" exitCode=0
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.571857 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerDied","Data":"d28e61d0d2f55693d122333015330ee2f63bc52ea7cb66a458707affe081b96a"}
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.723467 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.822195 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-catalog-content\") pod \"7c3276ef-d230-4079-9380-8dfa04c34a80\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") "
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.822300 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t8p7\" (UniqueName: \"kubernetes.io/projected/7c3276ef-d230-4079-9380-8dfa04c34a80-kube-api-access-7t8p7\") pod \"7c3276ef-d230-4079-9380-8dfa04c34a80\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") "
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.822399 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-utilities\") pod \"7c3276ef-d230-4079-9380-8dfa04c34a80\" (UID: \"7c3276ef-d230-4079-9380-8dfa04c34a80\") "
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.823271 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-utilities" (OuterVolumeSpecName: "utilities") pod "7c3276ef-d230-4079-9380-8dfa04c34a80" (UID: "7c3276ef-d230-4079-9380-8dfa04c34a80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.875553 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c3276ef-d230-4079-9380-8dfa04c34a80" (UID: "7c3276ef-d230-4079-9380-8dfa04c34a80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.935982 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:22:51 crc kubenswrapper[4724]: I0226 12:22:51.936036 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3276ef-d230-4079-9380-8dfa04c34a80-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.409790 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3276ef-d230-4079-9380-8dfa04c34a80-kube-api-access-7t8p7" (OuterVolumeSpecName: "kube-api-access-7t8p7") pod "7c3276ef-d230-4079-9380-8dfa04c34a80" (UID: "7c3276ef-d230-4079-9380-8dfa04c34a80"). InnerVolumeSpecName "kube-api-access-7t8p7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.448629 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t8p7\" (UniqueName: \"kubernetes.io/projected/7c3276ef-d230-4079-9380-8dfa04c34a80-kube-api-access-7t8p7\") on node \"crc\" DevicePath \"\""
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.584303 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpts" event={"ID":"7c3276ef-d230-4079-9380-8dfa04c34a80","Type":"ContainerDied","Data":"53a32bb07c237f4391812918f69d19878e3122db76d3e18e58730351a30c1a3b"}
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.584643 4724 scope.go:117] "RemoveContainer" containerID="d28e61d0d2f55693d122333015330ee2f63bc52ea7cb66a458707affe081b96a"
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.584383 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpts"
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.637485 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjpts"]
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.641417 4724 scope.go:117] "RemoveContainer" containerID="257cc8ceabc343c03f883b3aebf1a29264d5b530d6b40d1ef7f749ab0a0f8053"
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.648776 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bjpts"]
Feb 26 12:22:52 crc kubenswrapper[4724]: I0226 12:22:52.663389 4724 scope.go:117] "RemoveContainer" containerID="605104cda9046d467889a3d15b00c321ada54899b6db2c63d63b2a4e8295c2a2"
Feb 26 12:22:53 crc kubenswrapper[4724]: I0226 12:22:53.988219 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" path="/var/lib/kubelet/pods/7c3276ef-d230-4079-9380-8dfa04c34a80/volumes"
Feb 26 12:23:16 crc kubenswrapper[4724]: I0226 12:23:16.906502 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:23:16 crc kubenswrapper[4724]: I0226 12:23:16.907056 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:23:16 crc kubenswrapper[4724]: I0226 12:23:16.907100 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 12:23:16 crc kubenswrapper[4724]: I0226 12:23:16.907837 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 12:23:16 crc kubenswrapper[4724]: I0226 12:23:16.907888 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" gracePeriod=600
Feb 26 12:23:17 crc kubenswrapper[4724]: E0226 12:23:17.080997 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:23:17 crc kubenswrapper[4724]: I0226 12:23:17.893978 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" exitCode=0
Feb 26 12:23:17 crc kubenswrapper[4724]: I0226 12:23:17.894034 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"}
Feb 26 12:23:17 crc kubenswrapper[4724]: I0226 12:23:17.894075 4724 scope.go:117] "RemoveContainer" containerID="c4bf722a10be17c5ca8701930f6141f02fd4435ac6cdb580548158873afa50dd"
Feb 26 12:23:17 crc kubenswrapper[4724]: I0226 12:23:17.894938 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"
Feb 26 12:23:17 crc kubenswrapper[4724]: E0226 12:23:17.895350 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:23:30 crc kubenswrapper[4724]: I0226 12:23:30.975976 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"
Feb 26 12:23:30 crc kubenswrapper[4724]: E0226 12:23:30.977013 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:23:42 crc kubenswrapper[4724]: I0226 12:23:42.976283 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"
Feb 26 12:23:42 crc kubenswrapper[4724]: E0226 12:23:42.977352 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:23:55 crc kubenswrapper[4724]: I0226 12:23:55.975504 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db"
Feb 26 12:23:55 crc kubenswrapper[4724]: E0226 12:23:55.976414 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.152005 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535144-wk64c"]
Feb 26 12:24:00 crc kubenswrapper[4724]: E0226 12:24:00.153087 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="extract-utilities"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.153105 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="extract-utilities"
Feb 26 12:24:00 crc kubenswrapper[4724]: E0226 12:24:00.153152 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="registry-server"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.153162 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="registry-server"
Feb 26 12:24:00 crc kubenswrapper[4724]: E0226 12:24:00.153214 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="extract-content"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.153225 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="extract-content"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.153518 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3276ef-d230-4079-9380-8dfa04c34a80" containerName="registry-server"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.154346 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-wk64c"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.156325 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.156633 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.158396 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.177878 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535144-wk64c"]
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.305460 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxmfm\" (UniqueName: \"kubernetes.io/projected/494cf401-faeb-4819-982f-fc5fe2bb0fdc-kube-api-access-dxmfm\") pod \"auto-csr-approver-29535144-wk64c\" (UID: \"494cf401-faeb-4819-982f-fc5fe2bb0fdc\") " pod="openshift-infra/auto-csr-approver-29535144-wk64c"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.408311 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmfm\" (UniqueName: \"kubernetes.io/projected/494cf401-faeb-4819-982f-fc5fe2bb0fdc-kube-api-access-dxmfm\") pod \"auto-csr-approver-29535144-wk64c\" (UID: \"494cf401-faeb-4819-982f-fc5fe2bb0fdc\") " pod="openshift-infra/auto-csr-approver-29535144-wk64c"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.442423 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmfm\" (UniqueName: \"kubernetes.io/projected/494cf401-faeb-4819-982f-fc5fe2bb0fdc-kube-api-access-dxmfm\") pod \"auto-csr-approver-29535144-wk64c\" (UID: \"494cf401-faeb-4819-982f-fc5fe2bb0fdc\") " pod="openshift-infra/auto-csr-approver-29535144-wk64c"
Feb 26 12:24:00 crc kubenswrapper[4724]: I0226 12:24:00.489701 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-wk64c"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-wk64c" Feb 26 12:24:01 crc kubenswrapper[4724]: I0226 12:24:01.269890 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535144-wk64c"] Feb 26 12:24:01 crc kubenswrapper[4724]: I0226 12:24:01.667752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-wk64c" event={"ID":"494cf401-faeb-4819-982f-fc5fe2bb0fdc","Type":"ContainerStarted","Data":"37970ea5ef90d764717012beeae6434996a26ed275b725dfb9468fba0f1206b7"} Feb 26 12:24:03 crc kubenswrapper[4724]: I0226 12:24:03.689504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-wk64c" event={"ID":"494cf401-faeb-4819-982f-fc5fe2bb0fdc","Type":"ContainerStarted","Data":"16768e9ec32464d16ffcc41738c8904d86967e7d7d307b1b5505cee4c4600396"} Feb 26 12:24:03 crc kubenswrapper[4724]: I0226 12:24:03.714581 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535144-wk64c" podStartSLOduration=2.584048662 podStartE2EDuration="3.714530655s" podCreationTimestamp="2026-02-26 12:24:00 +0000 UTC" firstStartedPulling="2026-02-26 12:24:01.279422709 +0000 UTC m=+4707.935161824" lastFinishedPulling="2026-02-26 12:24:02.409904702 +0000 UTC m=+4709.065643817" observedRunningTime="2026-02-26 12:24:03.703078712 +0000 UTC m=+4710.358817827" watchObservedRunningTime="2026-02-26 12:24:03.714530655 +0000 UTC m=+4710.370269770" Feb 26 12:24:04 crc kubenswrapper[4724]: I0226 12:24:04.699298 4724 generic.go:334] "Generic (PLEG): container finished" podID="494cf401-faeb-4819-982f-fc5fe2bb0fdc" containerID="16768e9ec32464d16ffcc41738c8904d86967e7d7d307b1b5505cee4c4600396" exitCode=0 Feb 26 12:24:04 crc kubenswrapper[4724]: I0226 12:24:04.699370 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-wk64c" event={"ID":"494cf401-faeb-4819-982f-fc5fe2bb0fdc","Type":"ContainerDied","Data":"16768e9ec32464d16ffcc41738c8904d86967e7d7d307b1b5505cee4c4600396"} Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.554365 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-wk64c" Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.643827 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxmfm\" (UniqueName: \"kubernetes.io/projected/494cf401-faeb-4819-982f-fc5fe2bb0fdc-kube-api-access-dxmfm\") pod \"494cf401-faeb-4819-982f-fc5fe2bb0fdc\" (UID: \"494cf401-faeb-4819-982f-fc5fe2bb0fdc\") " Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.677619 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494cf401-faeb-4819-982f-fc5fe2bb0fdc-kube-api-access-dxmfm" (OuterVolumeSpecName: "kube-api-access-dxmfm") pod "494cf401-faeb-4819-982f-fc5fe2bb0fdc" (UID: "494cf401-faeb-4819-982f-fc5fe2bb0fdc"). InnerVolumeSpecName "kube-api-access-dxmfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.721767 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535144-wk64c" event={"ID":"494cf401-faeb-4819-982f-fc5fe2bb0fdc","Type":"ContainerDied","Data":"37970ea5ef90d764717012beeae6434996a26ed275b725dfb9468fba0f1206b7"} Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.721838 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37970ea5ef90d764717012beeae6434996a26ed275b725dfb9468fba0f1206b7" Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.722124 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535144-wk64c" Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.751608 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxmfm\" (UniqueName: \"kubernetes.io/projected/494cf401-faeb-4819-982f-fc5fe2bb0fdc-kube-api-access-dxmfm\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.800606 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-lwvnb"] Feb 26 12:24:06 crc kubenswrapper[4724]: I0226 12:24:06.811174 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535138-lwvnb"] Feb 26 12:24:07 crc kubenswrapper[4724]: I0226 12:24:07.978099 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:24:07 crc kubenswrapper[4724]: E0226 12:24:07.980330 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:24:07 crc kubenswrapper[4724]: I0226 12:24:07.993537 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca17bd1b-4cf2-46d2-9550-0676c7e9d40c" path="/var/lib/kubelet/pods/ca17bd1b-4cf2-46d2-9550-0676c7e9d40c/volumes" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.556278 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t6dv9"] Feb 26 12:24:11 crc kubenswrapper[4724]: E0226 12:24:11.557854 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494cf401-faeb-4819-982f-fc5fe2bb0fdc" containerName="oc" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.557923 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="494cf401-faeb-4819-982f-fc5fe2bb0fdc" containerName="oc" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.558156 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="494cf401-faeb-4819-982f-fc5fe2bb0fdc" containerName="oc" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.559579 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.579532 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6dv9"] Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.671676 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-catalog-content\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.671744 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-utilities\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.671996 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6lc6\" (UniqueName: \"kubernetes.io/projected/12a72bc2-5fa9-4ce8-8072-37de7cc16370-kube-api-access-j6lc6\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.774126 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-catalog-content\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.774215 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-utilities\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.774272 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6lc6\" (UniqueName: \"kubernetes.io/projected/12a72bc2-5fa9-4ce8-8072-37de7cc16370-kube-api-access-j6lc6\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.775124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-utilities\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.775159 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-catalog-content\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.814213 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j6lc6\" (UniqueName: \"kubernetes.io/projected/12a72bc2-5fa9-4ce8-8072-37de7cc16370-kube-api-access-j6lc6\") pod \"redhat-marketplace-t6dv9\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:11 crc kubenswrapper[4724]: I0226 12:24:11.878874 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:12 crc kubenswrapper[4724]: I0226 12:24:12.596256 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6dv9"] Feb 26 12:24:12 crc kubenswrapper[4724]: W0226 12:24:12.900936 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a72bc2_5fa9_4ce8_8072_37de7cc16370.slice/crio-8b8d585f183db37fc5ab5007aeea29dfe73eed6a31af4ffa70690987e97680a6 WatchSource:0}: Error finding container 8b8d585f183db37fc5ab5007aeea29dfe73eed6a31af4ffa70690987e97680a6: Status 404 returned error can't find the container with id 8b8d585f183db37fc5ab5007aeea29dfe73eed6a31af4ffa70690987e97680a6 Feb 26 12:24:13 crc kubenswrapper[4724]: E0226 12:24:13.317016 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a72bc2_5fa9_4ce8_8072_37de7cc16370.slice/crio-conmon-b38a2e63ea2adc97b191b8c164a931a90bbe4b5fab6537a8b9bce7d2e25876ee.scope\": RecentStats: unable to find data in memory cache]" Feb 26 12:24:13 crc kubenswrapper[4724]: I0226 12:24:13.796638 4724 generic.go:334] "Generic (PLEG): container finished" podID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerID="b38a2e63ea2adc97b191b8c164a931a90bbe4b5fab6537a8b9bce7d2e25876ee" exitCode=0 Feb 26 12:24:13 crc kubenswrapper[4724]: I0226 12:24:13.796689 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerDied","Data":"b38a2e63ea2adc97b191b8c164a931a90bbe4b5fab6537a8b9bce7d2e25876ee"} Feb 26 12:24:13 crc kubenswrapper[4724]: I0226 12:24:13.796720 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerStarted","Data":"8b8d585f183db37fc5ab5007aeea29dfe73eed6a31af4ffa70690987e97680a6"} Feb 26 12:24:14 crc kubenswrapper[4724]: I0226 12:24:14.806644 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerStarted","Data":"f6d0e316fd0e4f8d44530fa8e99f1344704ec0523b865b5d674a6fbebc20aa05"} Feb 26 12:24:16 crc kubenswrapper[4724]: I0226 12:24:16.834323 4724 generic.go:334] "Generic (PLEG): container finished" podID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerID="f6d0e316fd0e4f8d44530fa8e99f1344704ec0523b865b5d674a6fbebc20aa05" exitCode=0 Feb 26 12:24:16 crc kubenswrapper[4724]: I0226 12:24:16.834403 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerDied","Data":"f6d0e316fd0e4f8d44530fa8e99f1344704ec0523b865b5d674a6fbebc20aa05"} Feb 26 12:24:17 crc kubenswrapper[4724]: I0226 12:24:17.846017 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerStarted","Data":"4a645061e20485c8a6ddbc386e5353c595ceb382d86063e9f8814735d2863125"} Feb 26 12:24:17 crc kubenswrapper[4724]: I0226 12:24:17.866561 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t6dv9" podStartSLOduration=3.415127388 podStartE2EDuration="6.86654102s" podCreationTimestamp="2026-02-26 12:24:11 +0000 UTC" firstStartedPulling="2026-02-26 12:24:13.800273869 +0000 UTC m=+4720.456012984" lastFinishedPulling="2026-02-26 12:24:17.251687491 +0000 UTC m=+4723.907426616" observedRunningTime="2026-02-26 12:24:17.865637507 +0000 UTC m=+4724.521376642" watchObservedRunningTime="2026-02-26 12:24:17.86654102 +0000 UTC m=+4724.522280145" Feb 26 12:24:20 crc kubenswrapper[4724]: I0226 12:24:20.649225 4724 scope.go:117] "RemoveContainer" containerID="a6653805bf0e81d58345e6ef5cb86c56ee6417cbd5e02048c8be4f6982d9cd37" Feb 26 12:24:21 crc kubenswrapper[4724]: I0226 12:24:21.879895 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:21 crc kubenswrapper[4724]: I0226 12:24:21.880230 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:22 crc kubenswrapper[4724]: I0226 12:24:22.931580 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-t6dv9" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="registry-server" probeResult="failure" output=< Feb 26 12:24:22 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:24:22 crc kubenswrapper[4724]: > Feb 26 12:24:22 crc kubenswrapper[4724]: I0226 12:24:22.975833 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:24:22 crc kubenswrapper[4724]: E0226 12:24:22.976477 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:24:31 crc kubenswrapper[4724]: I0226 12:24:31.974575 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:32 crc kubenswrapper[4724]: I0226 12:24:32.062322 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:32 crc kubenswrapper[4724]: I0226 12:24:32.231057 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6dv9"] Feb 26 12:24:33 crc kubenswrapper[4724]: I0226 12:24:33.996226 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t6dv9" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="registry-server" containerID="cri-o://4a645061e20485c8a6ddbc386e5353c595ceb382d86063e9f8814735d2863125" gracePeriod=2 Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.005890 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerID="4a645061e20485c8a6ddbc386e5353c595ceb382d86063e9f8814735d2863125" exitCode=0 Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.005963 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerDied","Data":"4a645061e20485c8a6ddbc386e5353c595ceb382d86063e9f8814735d2863125"} Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.500868 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.569799 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-utilities\") pod \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.569889 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-catalog-content\") pod \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.569926 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6lc6\" (UniqueName: \"kubernetes.io/projected/12a72bc2-5fa9-4ce8-8072-37de7cc16370-kube-api-access-j6lc6\") pod \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\" (UID: \"12a72bc2-5fa9-4ce8-8072-37de7cc16370\") " Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.571790 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-utilities" (OuterVolumeSpecName: "utilities") pod "12a72bc2-5fa9-4ce8-8072-37de7cc16370" (UID: "12a72bc2-5fa9-4ce8-8072-37de7cc16370"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.604558 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12a72bc2-5fa9-4ce8-8072-37de7cc16370" (UID: "12a72bc2-5fa9-4ce8-8072-37de7cc16370"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.672700 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:35 crc kubenswrapper[4724]: I0226 12:24:35.673049 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a72bc2-5fa9-4ce8-8072-37de7cc16370-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.005789 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a72bc2-5fa9-4ce8-8072-37de7cc16370-kube-api-access-j6lc6" (OuterVolumeSpecName: "kube-api-access-j6lc6") pod "12a72bc2-5fa9-4ce8-8072-37de7cc16370" (UID: "12a72bc2-5fa9-4ce8-8072-37de7cc16370"). InnerVolumeSpecName "kube-api-access-j6lc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.023424 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6dv9" event={"ID":"12a72bc2-5fa9-4ce8-8072-37de7cc16370","Type":"ContainerDied","Data":"8b8d585f183db37fc5ab5007aeea29dfe73eed6a31af4ffa70690987e97680a6"} Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.023473 4724 scope.go:117] "RemoveContainer" containerID="4a645061e20485c8a6ddbc386e5353c595ceb382d86063e9f8814735d2863125" Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.023620 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6dv9" Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.080204 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6lc6\" (UniqueName: \"kubernetes.io/projected/12a72bc2-5fa9-4ce8-8072-37de7cc16370-kube-api-access-j6lc6\") on node \"crc\" DevicePath \"\"" Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.084226 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6dv9"] Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.085377 4724 scope.go:117] "RemoveContainer" containerID="f6d0e316fd0e4f8d44530fa8e99f1344704ec0523b865b5d674a6fbebc20aa05" Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.107519 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6dv9"] Feb 26 12:24:36 crc kubenswrapper[4724]: I0226 12:24:36.123353 4724 scope.go:117] "RemoveContainer" containerID="b38a2e63ea2adc97b191b8c164a931a90bbe4b5fab6537a8b9bce7d2e25876ee" Feb 26 12:24:37 crc kubenswrapper[4724]: I0226 12:24:37.976148 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:24:37 crc kubenswrapper[4724]: E0226 12:24:37.976799 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:24:37 crc kubenswrapper[4724]: I0226 12:24:37.986354 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" path="/var/lib/kubelet/pods/12a72bc2-5fa9-4ce8-8072-37de7cc16370/volumes" Feb 26 12:24:49 crc kubenswrapper[4724]: I0226 12:24:49.975515 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:24:49 crc kubenswrapper[4724]: E0226 12:24:49.976270 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:25:03 crc kubenswrapper[4724]: I0226 12:25:03.991314 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kmxt5"] Feb 26 12:25:03 crc kubenswrapper[4724]: E0226 
12:25:03.992333 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="registry-server" Feb 26 12:25:03 crc kubenswrapper[4724]: I0226 12:25:03.992348 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="registry-server" Feb 26 12:25:03 crc kubenswrapper[4724]: E0226 12:25:03.992370 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="extract-utilities" Feb 26 12:25:03 crc kubenswrapper[4724]: I0226 12:25:03.992377 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="extract-utilities" Feb 26 12:25:03 crc kubenswrapper[4724]: E0226 12:25:03.992393 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="extract-content" Feb 26 12:25:03 crc kubenswrapper[4724]: I0226 12:25:03.992399 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="extract-content" Feb 26 12:25:03 crc kubenswrapper[4724]: I0226 12:25:03.992620 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a72bc2-5fa9-4ce8-8072-37de7cc16370" containerName="registry-server" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.003187 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.006268 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmxt5"] Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.219444 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zrxz\" (UniqueName: \"kubernetes.io/projected/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-kube-api-access-2zrxz\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.219805 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-catalog-content\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.220028 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-utilities\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.321677 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-utilities\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.322255 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zrxz\" (UniqueName: 
\"kubernetes.io/projected/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-kube-api-access-2zrxz\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.322335 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-utilities\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.322625 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-catalog-content\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.323311 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-catalog-content\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.357049 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zrxz\" (UniqueName: \"kubernetes.io/projected/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-kube-api-access-2zrxz\") pod \"redhat-operators-kmxt5\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.630113 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:04 crc kubenswrapper[4724]: I0226 12:25:04.975450 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:25:04 crc kubenswrapper[4724]: E0226 12:25:04.976029 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:25:05 crc kubenswrapper[4724]: I0226 12:25:05.829446 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmxt5"] Feb 26 12:25:06 crc kubenswrapper[4724]: I0226 12:25:06.321539 4724 generic.go:334] "Generic (PLEG): container finished" podID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerID="0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58" exitCode=0 Feb 26 12:25:06 crc kubenswrapper[4724]: I0226 12:25:06.321592 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerDied","Data":"0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58"} Feb 26 12:25:06 crc kubenswrapper[4724]: I0226 12:25:06.321850 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerStarted","Data":"587d36956dfd2ab926c69f5011126064cdcac69021ada39ad4a4d36030e0f9e2"} Feb 26 12:25:09 crc kubenswrapper[4724]: I0226 12:25:09.357436 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerStarted","Data":"e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b"} Feb 26 12:25:16 crc kubenswrapper[4724]: I0226 12:25:16.976173 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:25:16 crc kubenswrapper[4724]: E0226 12:25:16.977215 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:25:25 crc kubenswrapper[4724]: I0226 12:25:25.139748 4724 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.138395275s: [/var/lib/containers/storage/overlay/c0281d18f6b2181ad680c8e0e2ba5005a5d9f05249768242d933b43b8d449daa/diff /var/log/pods/openstack_nova-api-0_2496c701-9abc-4d28-8f5d-9cde4cefbabb/nova-api-api/0.log]; will not log again for this container unless duration exceeds 2s Feb 26 12:25:28 crc kubenswrapper[4724]: I0226 12:25:28.975384 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:25:28 crc kubenswrapper[4724]: E0226 12:25:28.977460 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:25:29 crc kubenswrapper[4724]: I0226 12:25:29.619807 4724 generic.go:334] "Generic (PLEG): container finished" podID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerID="e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b" exitCode=0 Feb 26 12:25:29 crc kubenswrapper[4724]: I0226 12:25:29.619853 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerDied","Data":"e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b"} Feb 26 12:25:31 crc kubenswrapper[4724]: I0226 12:25:31.646852 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerStarted","Data":"ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586"} Feb 26 12:25:31 crc kubenswrapper[4724]: I0226 12:25:31.675451 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kmxt5" podStartSLOduration=4.012570756 podStartE2EDuration="28.67539786s" podCreationTimestamp="2026-02-26 12:25:03 +0000 UTC" firstStartedPulling="2026-02-26 12:25:06.323107691 +0000 UTC m=+4772.978846806" lastFinishedPulling="2026-02-26 12:25:30.985934795 +0000 UTC m=+4797.641673910" observedRunningTime="2026-02-26 12:25:31.66366626 +0000 UTC m=+4798.319405375" watchObservedRunningTime="2026-02-26 12:25:31.67539786 +0000 UTC m=+4798.331136975" Feb 26 12:25:34 crc kubenswrapper[4724]: I0226 12:25:34.630895 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:34 crc kubenswrapper[4724]: I0226 12:25:34.631419 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:25:35 crc kubenswrapper[4724]: I0226 12:25:35.676597 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmxt5" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:25:35 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:25:35 crc kubenswrapper[4724]: > Feb 26 12:25:40 crc kubenswrapper[4724]: I0226 12:25:40.976499 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:25:40 crc kubenswrapper[4724]: E0226 12:25:40.980234 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:25:45 crc kubenswrapper[4724]: I0226 12:25:45.800764 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmxt5" 
podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:25:45 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:25:45 crc kubenswrapper[4724]: > Feb 26 12:25:51 crc kubenswrapper[4724]: I0226 12:25:51.975276 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:25:51 crc kubenswrapper[4724]: E0226 12:25:51.975971 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:25:55 crc kubenswrapper[4724]: I0226 12:25:55.673741 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmxt5" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:25:55 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:25:55 crc kubenswrapper[4724]: > Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.297542 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535146-k6td2"] Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.309133 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.313830 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535146-k6td2"] Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.370129 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.370135 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.370171 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.393652 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzbf\" (UniqueName: \"kubernetes.io/projected/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f-kube-api-access-vxzbf\") pod \"auto-csr-approver-29535146-k6td2\" (UID: \"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f\") " pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.496416 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzbf\" (UniqueName: \"kubernetes.io/projected/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f-kube-api-access-vxzbf\") pod \"auto-csr-approver-29535146-k6td2\" (UID: \"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f\") " pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.543140 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzbf\" (UniqueName: \"kubernetes.io/projected/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f-kube-api-access-vxzbf\") pod 
\"auto-csr-approver-29535146-k6td2\" (UID: \"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f\") " pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:00 crc kubenswrapper[4724]: I0226 12:26:00.663280 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:02 crc kubenswrapper[4724]: I0226 12:26:02.227687 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535146-k6td2"] Feb 26 12:26:02 crc kubenswrapper[4724]: I0226 12:26:02.973929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-k6td2" event={"ID":"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f","Type":"ContainerStarted","Data":"e868857135721970671323d1b0c3d88b3432ff6d347a16ef19b65c798b1de078"} Feb 26 12:26:04 crc kubenswrapper[4724]: I0226 12:26:04.976174 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:26:04 crc kubenswrapper[4724]: E0226 12:26:04.976814 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:26:05 crc kubenswrapper[4724]: I0226 12:26:05.690708 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmxt5" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:26:05 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:26:05 crc kubenswrapper[4724]: > Feb 26 12:26:06 crc kubenswrapper[4724]: I0226 12:26:06.014136 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-k6td2" event={"ID":"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f","Type":"ContainerStarted","Data":"95b7c58bd5b68d8f55520c3ee834137c05fb308d4714da0a5b37c3160bdc499a"} Feb 26 12:26:06 crc kubenswrapper[4724]: I0226 12:26:06.087068 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535146-k6td2" podStartSLOduration=4.656149341 podStartE2EDuration="6.08704221s" podCreationTimestamp="2026-02-26 12:26:00 +0000 UTC" firstStartedPulling="2026-02-26 12:26:02.249273437 +0000 UTC m=+4828.905012552" lastFinishedPulling="2026-02-26 12:26:03.680166306 +0000 UTC m=+4830.335905421" observedRunningTime="2026-02-26 12:26:06.042242355 +0000 UTC m=+4832.697981470" watchObservedRunningTime="2026-02-26 12:26:06.08704221 +0000 UTC m=+4832.742781325" Feb 26 12:26:07 crc kubenswrapper[4724]: I0226 12:26:07.025254 4724 generic.go:334] "Generic (PLEG): container finished" podID="ac2157f7-d6e6-4b07-b51a-ec1d3035a24f" containerID="95b7c58bd5b68d8f55520c3ee834137c05fb308d4714da0a5b37c3160bdc499a" exitCode=0 Feb 26 12:26:07 crc kubenswrapper[4724]: I0226 12:26:07.025328 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-k6td2" event={"ID":"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f","Type":"ContainerDied","Data":"95b7c58bd5b68d8f55520c3ee834137c05fb308d4714da0a5b37c3160bdc499a"} Feb 26 12:26:08 crc kubenswrapper[4724]: I0226 12:26:08.556332 4724 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:08 crc kubenswrapper[4724]: I0226 12:26:08.679280 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzbf\" (UniqueName: \"kubernetes.io/projected/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f-kube-api-access-vxzbf\") pod \"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f\" (UID: \"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f\") " Feb 26 12:26:08 crc kubenswrapper[4724]: I0226 12:26:08.715048 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f-kube-api-access-vxzbf" (OuterVolumeSpecName: "kube-api-access-vxzbf") pod "ac2157f7-d6e6-4b07-b51a-ec1d3035a24f" (UID: "ac2157f7-d6e6-4b07-b51a-ec1d3035a24f"). InnerVolumeSpecName "kube-api-access-vxzbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:26:08 crc kubenswrapper[4724]: I0226 12:26:08.782630 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzbf\" (UniqueName: \"kubernetes.io/projected/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f-kube-api-access-vxzbf\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:09 crc kubenswrapper[4724]: I0226 12:26:09.044283 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535146-k6td2" event={"ID":"ac2157f7-d6e6-4b07-b51a-ec1d3035a24f","Type":"ContainerDied","Data":"e868857135721970671323d1b0c3d88b3432ff6d347a16ef19b65c798b1de078"} Feb 26 12:26:09 crc kubenswrapper[4724]: I0226 12:26:09.044330 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535146-k6td2" Feb 26 12:26:09 crc kubenswrapper[4724]: I0226 12:26:09.045587 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e868857135721970671323d1b0c3d88b3432ff6d347a16ef19b65c798b1de078" Feb 26 12:26:09 crc kubenswrapper[4724]: I0226 12:26:09.141359 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-dw7jr"] Feb 26 12:26:09 crc kubenswrapper[4724]: I0226 12:26:09.159420 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535140-dw7jr"] Feb 26 12:26:10 crc kubenswrapper[4724]: I0226 12:26:10.002393 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5" path="/var/lib/kubelet/pods/d7e89b92-c35b-43e8-aa03-2c7e2a71c2f5/volumes" Feb 26 12:26:15 crc kubenswrapper[4724]: I0226 12:26:15.938370 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmxt5" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:26:15 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:26:15 crc kubenswrapper[4724]: > Feb 26 12:26:16 crc kubenswrapper[4724]: I0226 12:26:16.975401 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:26:16 crc kubenswrapper[4724]: E0226 12:26:16.975983 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:26:21 crc kubenswrapper[4724]: I0226 12:26:21.264288 4724 scope.go:117] "RemoveContainer" containerID="17cfd3d0e76efe6bb3b49e58f6d3b231187564902c10e6e728bfc15c4730dbaa" Feb 26 12:26:25 crc kubenswrapper[4724]: I0226 12:26:25.671098 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmxt5" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:26:25 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:26:25 crc kubenswrapper[4724]: > Feb 26 12:26:27 crc kubenswrapper[4724]: I0226 12:26:27.976481 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:26:27 crc kubenswrapper[4724]: E0226 12:26:27.977323 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:26:34 crc kubenswrapper[4724]: I0226 12:26:34.683908 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:26:34 crc kubenswrapper[4724]: I0226 12:26:34.740077 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:26:35 crc kubenswrapper[4724]: I0226 12:26:35.249244 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmxt5"] Feb 26 12:26:36 crc kubenswrapper[4724]: I0226 12:26:36.277247 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kmxt5" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" containerID="cri-o://ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586" gracePeriod=2 Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.138040 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.260843 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-catalog-content\") pod \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.261749 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-utilities\") pod \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.261877 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zrxz\" (UniqueName: \"kubernetes.io/projected/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-kube-api-access-2zrxz\") pod \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\" (UID: \"f0220fbf-6d20-45e2-90eb-0fe4903b53c2\") " Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.262115 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-utilities" (OuterVolumeSpecName: "utilities") pod "f0220fbf-6d20-45e2-90eb-0fe4903b53c2" (UID: "f0220fbf-6d20-45e2-90eb-0fe4903b53c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.262690 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.270632 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-kube-api-access-2zrxz" (OuterVolumeSpecName: "kube-api-access-2zrxz") pod "f0220fbf-6d20-45e2-90eb-0fe4903b53c2" (UID: "f0220fbf-6d20-45e2-90eb-0fe4903b53c2"). InnerVolumeSpecName "kube-api-access-2zrxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.286911 4724 generic.go:334] "Generic (PLEG): container finished" podID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerID="ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586" exitCode=0 Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.286959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerDied","Data":"ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586"} Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.286990 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmxt5" event={"ID":"f0220fbf-6d20-45e2-90eb-0fe4903b53c2","Type":"ContainerDied","Data":"587d36956dfd2ab926c69f5011126064cdcac69021ada39ad4a4d36030e0f9e2"} Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.287008 4724 scope.go:117] "RemoveContainer" containerID="ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.287149 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmxt5" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.328006 4724 scope.go:117] "RemoveContainer" containerID="e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.350823 4724 scope.go:117] "RemoveContainer" containerID="0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.365369 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zrxz\" (UniqueName: \"kubernetes.io/projected/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-kube-api-access-2zrxz\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.406837 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0220fbf-6d20-45e2-90eb-0fe4903b53c2" (UID: "f0220fbf-6d20-45e2-90eb-0fe4903b53c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.409282 4724 scope.go:117] "RemoveContainer" containerID="ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586" Feb 26 12:26:37 crc kubenswrapper[4724]: E0226 12:26:37.414076 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586\": container with ID starting with ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586 not found: ID does not exist" containerID="ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.414115 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586"} err="failed to get container status \"ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586\": rpc error: code = NotFound desc = could not find container \"ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586\": container with ID starting with ace32fc8b2bebf89fc8f53b1283f353eafdf80c6b5700ed937117ac4e161e586 not found: ID does not exist" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.414152 4724 scope.go:117] "RemoveContainer" containerID="e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b" Feb 26 12:26:37 crc kubenswrapper[4724]: E0226 12:26:37.414594 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b\": container with ID starting with e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b not found: ID does not exist" containerID="e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.414656 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b"} err="failed to get container status \"e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b\": rpc error: code = NotFound desc = could not find container \"e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b\": container with ID starting with 
e8a772461145e2a2ee16eaf6613e320a3ac004075ed60f2cd87595e60807581b not found: ID does not exist" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.414700 4724 scope.go:117] "RemoveContainer" containerID="0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58" Feb 26 12:26:37 crc kubenswrapper[4724]: E0226 12:26:37.415013 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58\": container with ID starting with 0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58 not found: ID does not exist" containerID="0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.415046 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58"} err="failed to get container status \"0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58\": rpc error: code = NotFound desc = could not find container \"0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58\": container with ID starting with 0569b5318593a490dea14e92a6cdf8758a66ce152418d96ce5ea6abfddfb9e58 not found: ID does not exist" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.467637 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0220fbf-6d20-45e2-90eb-0fe4903b53c2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.625276 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmxt5"] Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.635671 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kmxt5"] Feb 26 12:26:37 crc kubenswrapper[4724]: I0226 12:26:37.987695 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" path="/var/lib/kubelet/pods/f0220fbf-6d20-45e2-90eb-0fe4903b53c2/volumes" Feb 26 12:26:41 crc kubenswrapper[4724]: I0226 12:26:41.975697 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:26:41 crc kubenswrapper[4724]: E0226 12:26:41.976281 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:26:54 crc kubenswrapper[4724]: I0226 12:26:54.975415 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:26:54 crc kubenswrapper[4724]: E0226 12:26:54.976125 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:27:06 
crc kubenswrapper[4724]: I0226 12:27:06.975318 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:27:06 crc kubenswrapper[4724]: E0226 12:27:06.976012 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:27:17 crc kubenswrapper[4724]: I0226 12:27:17.976772 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:27:17 crc kubenswrapper[4724]: E0226 12:27:17.978142 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:27:32 crc kubenswrapper[4724]: I0226 12:27:32.976491 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:27:32 crc kubenswrapper[4724]: E0226 12:27:32.978616 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:27:44 crc kubenswrapper[4724]: I0226 12:27:44.976671 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:27:44 crc kubenswrapper[4724]: E0226 12:27:44.977448 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:27:57 crc kubenswrapper[4724]: I0226 12:27:57.978452 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:27:57 crc kubenswrapper[4724]: E0226 12:27:57.980685 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.159155 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535148-k9r6q"] Feb 26 12:28:00 crc 
kubenswrapper[4724]: E0226 12:28:00.164998 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="extract-utilities" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.165051 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="extract-utilities" Feb 26 12:28:00 crc kubenswrapper[4724]: E0226 12:28:00.165139 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="extract-content" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.165149 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="extract-content" Feb 26 12:28:00 crc kubenswrapper[4724]: E0226 12:28:00.165268 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.165279 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" Feb 26 12:28:00 crc kubenswrapper[4724]: E0226 12:28:00.165297 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2157f7-d6e6-4b07-b51a-ec1d3035a24f" containerName="oc" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.165307 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2157f7-d6e6-4b07-b51a-ec1d3035a24f" containerName="oc" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.165781 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0220fbf-6d20-45e2-90eb-0fe4903b53c2" containerName="registry-server" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.165802 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac2157f7-d6e6-4b07-b51a-ec1d3035a24f" containerName="oc" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.167901 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.171495 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.171902 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.172133 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.240074 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535148-k9r6q"] Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.258906 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvl7f\" (UniqueName: \"kubernetes.io/projected/2832e830-817f-468f-9100-f7377498be57-kube-api-access-vvl7f\") pod \"auto-csr-approver-29535148-k9r6q\" (UID: \"2832e830-817f-468f-9100-f7377498be57\") " pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.361171 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvl7f\" (UniqueName: \"kubernetes.io/projected/2832e830-817f-468f-9100-f7377498be57-kube-api-access-vvl7f\") pod \"auto-csr-approver-29535148-k9r6q\" (UID: \"2832e830-817f-468f-9100-f7377498be57\") " pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.386328 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvl7f\" (UniqueName: \"kubernetes.io/projected/2832e830-817f-468f-9100-f7377498be57-kube-api-access-vvl7f\") pod \"auto-csr-approver-29535148-k9r6q\" (UID: \"2832e830-817f-468f-9100-f7377498be57\") " pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:00 crc kubenswrapper[4724]: I0226 12:28:00.520017 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:01 crc kubenswrapper[4724]: I0226 12:28:01.088199 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535148-k9r6q"] Feb 26 12:28:01 crc kubenswrapper[4724]: I0226 12:28:01.105368 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:28:01 crc kubenswrapper[4724]: I0226 12:28:01.221687 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" event={"ID":"2832e830-817f-468f-9100-f7377498be57","Type":"ContainerStarted","Data":"ffc0c981ba4b5d3e04c57f316facbbae8f86a14b1fd7674de3370ea20e02e22c"} Feb 26 12:28:03 crc kubenswrapper[4724]: I0226 12:28:03.242378 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" event={"ID":"2832e830-817f-468f-9100-f7377498be57","Type":"ContainerStarted","Data":"9ea5a773ae0f2aee8ea8896b217dd43e8e35591a3253c2623aed0ee4ad7f93ad"} Feb 26 12:28:03 crc kubenswrapper[4724]: I0226 12:28:03.263598 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" podStartSLOduration=2.06592364 podStartE2EDuration="3.263569279s" podCreationTimestamp="2026-02-26 12:28:00 +0000 UTC" firstStartedPulling="2026-02-26 12:28:01.103670175 +0000 UTC m=+4947.759409290" lastFinishedPulling="2026-02-26 12:28:02.301315814 +0000 UTC m=+4948.957054929" observedRunningTime="2026-02-26 12:28:03.26202455 +0000 UTC m=+4949.917763665" watchObservedRunningTime="2026-02-26 12:28:03.263569279 +0000 UTC m=+4949.919308394" Feb 26 12:28:05 crc kubenswrapper[4724]: I0226 12:28:05.261473 4724 generic.go:334] "Generic (PLEG): container finished" podID="2832e830-817f-468f-9100-f7377498be57" containerID="9ea5a773ae0f2aee8ea8896b217dd43e8e35591a3253c2623aed0ee4ad7f93ad" exitCode=0 Feb 26 12:28:05 crc kubenswrapper[4724]: I0226 12:28:05.261553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" event={"ID":"2832e830-817f-468f-9100-f7377498be57","Type":"ContainerDied","Data":"9ea5a773ae0f2aee8ea8896b217dd43e8e35591a3253c2623aed0ee4ad7f93ad"} Feb 26 12:28:07 crc kubenswrapper[4724]: I0226 12:28:07.279858 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" event={"ID":"2832e830-817f-468f-9100-f7377498be57","Type":"ContainerDied","Data":"ffc0c981ba4b5d3e04c57f316facbbae8f86a14b1fd7674de3370ea20e02e22c"} Feb 26 12:28:07 crc kubenswrapper[4724]: I0226 12:28:07.281338 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc0c981ba4b5d3e04c57f316facbbae8f86a14b1fd7674de3370ea20e02e22c" Feb 26 12:28:07 crc kubenswrapper[4724]: I0226 12:28:07.315915 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:07 crc kubenswrapper[4724]: I0226 12:28:07.410825 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvl7f\" (UniqueName: \"kubernetes.io/projected/2832e830-817f-468f-9100-f7377498be57-kube-api-access-vvl7f\") pod \"2832e830-817f-468f-9100-f7377498be57\" (UID: \"2832e830-817f-468f-9100-f7377498be57\") " Feb 26 12:28:07 crc kubenswrapper[4724]: I0226 12:28:07.421046 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2832e830-817f-468f-9100-f7377498be57-kube-api-access-vvl7f" (OuterVolumeSpecName: "kube-api-access-vvl7f") pod "2832e830-817f-468f-9100-f7377498be57" (UID: "2832e830-817f-468f-9100-f7377498be57"). InnerVolumeSpecName "kube-api-access-vvl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:28:07 crc kubenswrapper[4724]: I0226 12:28:07.513937 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvl7f\" (UniqueName: \"kubernetes.io/projected/2832e830-817f-468f-9100-f7377498be57-kube-api-access-vvl7f\") on node \"crc\" DevicePath \"\"" Feb 26 12:28:08 crc kubenswrapper[4724]: I0226 12:28:08.288772 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535148-k9r6q" Feb 26 12:28:08 crc kubenswrapper[4724]: I0226 12:28:08.386328 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-stg7q"] Feb 26 12:28:08 crc kubenswrapper[4724]: I0226 12:28:08.395705 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535142-stg7q"] Feb 26 12:28:09 crc kubenswrapper[4724]: I0226 12:28:09.985878 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac99134-c83a-468f-b4f7-61d2eed0b581" path="/var/lib/kubelet/pods/aac99134-c83a-468f-b4f7-61d2eed0b581/volumes" Feb 26 12:28:12 crc kubenswrapper[4724]: I0226 12:28:12.976640 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:28:12 crc kubenswrapper[4724]: E0226 12:28:12.977488 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:28:21 crc kubenswrapper[4724]: I0226 12:28:21.412094 4724 scope.go:117] "RemoveContainer" containerID="e0b377088fb6a3b72727f3b7fed144f1c771e0f9b2af2a7737957bc86e2d46d8" Feb 26 12:28:23 crc kubenswrapper[4724]: I0226 12:28:23.984587 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:28:24 crc kubenswrapper[4724]: I0226 12:28:24.425195 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"4af0530e9f0e68464b2fd18e31afbe76a5f6302651d2b503514574dd87244d41"} Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.344120 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535150-64kp5"] Feb 26 12:30:00 crc 
kubenswrapper[4724]: E0226 12:30:00.345553 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2832e830-817f-468f-9100-f7377498be57" containerName="oc" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.345572 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2832e830-817f-468f-9100-f7377498be57" containerName="oc" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.349559 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2832e830-817f-468f-9100-f7377498be57" containerName="oc" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.350453 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw"] Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.351491 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.357803 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.358118 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.358270 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.360984 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.364109 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.364909 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.380412 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw"] Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.402809 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535150-64kp5"] Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.534571 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-secret-volume\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.534708 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkb2\" (UniqueName: \"kubernetes.io/projected/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-kube-api-access-xhkb2\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.534870 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7jk\" (UniqueName: 
\"kubernetes.io/projected/3b845e03-8213-4e63-bab5-ea846475f8fd-kube-api-access-8q7jk\") pod \"auto-csr-approver-29535150-64kp5\" (UID: \"3b845e03-8213-4e63-bab5-ea846475f8fd\") " pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.534985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-config-volume\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.636718 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-config-volume\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.636850 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-secret-volume\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.636970 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkb2\" (UniqueName: \"kubernetes.io/projected/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-kube-api-access-xhkb2\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.637022 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q7jk\" (UniqueName: \"kubernetes.io/projected/3b845e03-8213-4e63-bab5-ea846475f8fd-kube-api-access-8q7jk\") pod \"auto-csr-approver-29535150-64kp5\" (UID: \"3b845e03-8213-4e63-bab5-ea846475f8fd\") " pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.639205 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-config-volume\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.659456 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-secret-volume\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.659684 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q7jk\" (UniqueName: \"kubernetes.io/projected/3b845e03-8213-4e63-bab5-ea846475f8fd-kube-api-access-8q7jk\") pod \"auto-csr-approver-29535150-64kp5\" (UID: \"3b845e03-8213-4e63-bab5-ea846475f8fd\") " 
pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.675225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkb2\" (UniqueName: \"kubernetes.io/projected/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-kube-api-access-xhkb2\") pod \"collect-profiles-29535150-xn5fw\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.689193 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:00 crc kubenswrapper[4724]: I0226 12:30:00.709872 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:01 crc kubenswrapper[4724]: I0226 12:30:01.518432 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535150-64kp5"] Feb 26 12:30:01 crc kubenswrapper[4724]: I0226 12:30:01.595213 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw"] Feb 26 12:30:02 crc kubenswrapper[4724]: I0226 12:30:02.386451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535150-64kp5" event={"ID":"3b845e03-8213-4e63-bab5-ea846475f8fd","Type":"ContainerStarted","Data":"791fb57dfbc5870d5a36003355a20078e8b82736e24b60872e0caecd8fc48db5"} Feb 26 12:30:02 crc kubenswrapper[4724]: I0226 12:30:02.388324 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" event={"ID":"55e091f5-f546-4bf5-b14d-9ae47e7c3f96","Type":"ContainerStarted","Data":"74e8e6227a58736f6af55cf66b3c62b9f78605007c3d7e3b2e6e3e2175c84e79"} Feb 26 12:30:02 crc kubenswrapper[4724]: I0226 12:30:02.388480 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" event={"ID":"55e091f5-f546-4bf5-b14d-9ae47e7c3f96","Type":"ContainerStarted","Data":"890e9b954d236e4cb0f1ac4fee58304ed64f532fd2f382f117c9756bcac5cd04"} Feb 26 12:30:02 crc kubenswrapper[4724]: I0226 12:30:02.423948 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" podStartSLOduration=2.4239101339999998 podStartE2EDuration="2.423910134s" podCreationTimestamp="2026-02-26 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:30:02.419392568 +0000 UTC m=+5069.075131693" watchObservedRunningTime="2026-02-26 12:30:02.423910134 +0000 UTC m=+5069.079649259" Feb 26 12:30:03 crc kubenswrapper[4724]: I0226 12:30:03.397234 4724 generic.go:334] "Generic (PLEG): container finished" podID="55e091f5-f546-4bf5-b14d-9ae47e7c3f96" containerID="74e8e6227a58736f6af55cf66b3c62b9f78605007c3d7e3b2e6e3e2175c84e79" exitCode=0 Feb 26 12:30:03 crc kubenswrapper[4724]: I0226 12:30:03.397288 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" event={"ID":"55e091f5-f546-4bf5-b14d-9ae47e7c3f96","Type":"ContainerDied","Data":"74e8e6227a58736f6af55cf66b3c62b9f78605007c3d7e3b2e6e3e2175c84e79"} Feb 26 12:30:04 crc kubenswrapper[4724]: I0226 12:30:04.408896 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535150-64kp5" event={"ID":"3b845e03-8213-4e63-bab5-ea846475f8fd","Type":"ContainerStarted","Data":"783f8756c9a2099f1b9c0717fa941ecaaf8a92f1819d0e9c3fa40f1f7d01b044"} Feb 26 12:30:04 crc kubenswrapper[4724]: I0226 12:30:04.431729 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535150-64kp5" podStartSLOduration=2.096067687 podStartE2EDuration="4.431707151s" podCreationTimestamp="2026-02-26 12:30:00 +0000 UTC" firstStartedPulling="2026-02-26 12:30:01.524801332 +0000 UTC m=+5068.180540447" lastFinishedPulling="2026-02-26 12:30:03.860440786 +0000 UTC m=+5070.516179911" observedRunningTime="2026-02-26 12:30:04.42107833 +0000 UTC m=+5071.076817445" watchObservedRunningTime="2026-02-26 12:30:04.431707151 +0000 UTC m=+5071.087446266" Feb 26 12:30:04 crc kubenswrapper[4724]: I0226 12:30:04.928205 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.025605 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkb2\" (UniqueName: \"kubernetes.io/projected/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-kube-api-access-xhkb2\") pod \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.025654 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-config-volume\") pod \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.025789 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-secret-volume\") pod \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\" (UID: \"55e091f5-f546-4bf5-b14d-9ae47e7c3f96\") " Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.027676 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-config-volume" (OuterVolumeSpecName: "config-volume") pod "55e091f5-f546-4bf5-b14d-9ae47e7c3f96" (UID: "55e091f5-f546-4bf5-b14d-9ae47e7c3f96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.036215 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "55e091f5-f546-4bf5-b14d-9ae47e7c3f96" (UID: "55e091f5-f546-4bf5-b14d-9ae47e7c3f96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.043436 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-kube-api-access-xhkb2" (OuterVolumeSpecName: "kube-api-access-xhkb2") pod "55e091f5-f546-4bf5-b14d-9ae47e7c3f96" (UID: "55e091f5-f546-4bf5-b14d-9ae47e7c3f96"). InnerVolumeSpecName "kube-api-access-xhkb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.127978 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.128010 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkb2\" (UniqueName: \"kubernetes.io/projected/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-kube-api-access-xhkb2\") on node \"crc\" DevicePath \"\"" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.128019 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e091f5-f546-4bf5-b14d-9ae47e7c3f96-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.417646 4724 generic.go:334] "Generic (PLEG): container finished" podID="3b845e03-8213-4e63-bab5-ea846475f8fd" containerID="783f8756c9a2099f1b9c0717fa941ecaaf8a92f1819d0e9c3fa40f1f7d01b044" exitCode=0 Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.417970 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535150-64kp5" event={"ID":"3b845e03-8213-4e63-bab5-ea846475f8fd","Type":"ContainerDied","Data":"783f8756c9a2099f1b9c0717fa941ecaaf8a92f1819d0e9c3fa40f1f7d01b044"} Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.420450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" event={"ID":"55e091f5-f546-4bf5-b14d-9ae47e7c3f96","Type":"ContainerDied","Data":"890e9b954d236e4cb0f1ac4fee58304ed64f532fd2f382f117c9756bcac5cd04"} Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.420492 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890e9b954d236e4cb0f1ac4fee58304ed64f532fd2f382f117c9756bcac5cd04" Feb 26 12:30:05 crc kubenswrapper[4724]: I0226 12:30:05.420554 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw" Feb 26 12:30:06 crc kubenswrapper[4724]: I0226 12:30:06.013481 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc"] Feb 26 12:30:06 crc kubenswrapper[4724]: I0226 12:30:06.023639 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535105-nm7kc"] Feb 26 12:30:06 crc kubenswrapper[4724]: I0226 12:30:06.859961 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:06 crc kubenswrapper[4724]: I0226 12:30:06.984280 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q7jk\" (UniqueName: \"kubernetes.io/projected/3b845e03-8213-4e63-bab5-ea846475f8fd-kube-api-access-8q7jk\") pod \"3b845e03-8213-4e63-bab5-ea846475f8fd\" (UID: \"3b845e03-8213-4e63-bab5-ea846475f8fd\") " Feb 26 12:30:06 crc kubenswrapper[4724]: I0226 12:30:06.996471 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b845e03-8213-4e63-bab5-ea846475f8fd-kube-api-access-8q7jk" (OuterVolumeSpecName: "kube-api-access-8q7jk") pod "3b845e03-8213-4e63-bab5-ea846475f8fd" (UID: "3b845e03-8213-4e63-bab5-ea846475f8fd"). 
InnerVolumeSpecName "kube-api-access-8q7jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.079390 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535144-wk64c"] Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.087710 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q7jk\" (UniqueName: \"kubernetes.io/projected/3b845e03-8213-4e63-bab5-ea846475f8fd-kube-api-access-8q7jk\") on node \"crc\" DevicePath \"\"" Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.091943 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535144-wk64c"] Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.442205 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535150-64kp5" event={"ID":"3b845e03-8213-4e63-bab5-ea846475f8fd","Type":"ContainerDied","Data":"791fb57dfbc5870d5a36003355a20078e8b82736e24b60872e0caecd8fc48db5"} Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.442258 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791fb57dfbc5870d5a36003355a20078e8b82736e24b60872e0caecd8fc48db5" Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.442254 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535150-64kp5" Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.986102 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494cf401-faeb-4819-982f-fc5fe2bb0fdc" path="/var/lib/kubelet/pods/494cf401-faeb-4819-982f-fc5fe2bb0fdc/volumes" Feb 26 12:30:07 crc kubenswrapper[4724]: I0226 12:30:07.988719 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825b34fe-cee9-42f2-9954-1aa50c2b748e" path="/var/lib/kubelet/pods/825b34fe-cee9-42f2-9954-1aa50c2b748e/volumes" Feb 26 12:30:21 crc kubenswrapper[4724]: I0226 12:30:21.524156 4724 scope.go:117] "RemoveContainer" containerID="d1baa7ba9938a5a6ae314e9aa6e2cde1549114b86caaf45fc5355a77350f6642" Feb 26 12:30:21 crc kubenswrapper[4724]: I0226 12:30:21.553365 4724 scope.go:117] "RemoveContainer" containerID="16768e9ec32464d16ffcc41738c8904d86967e7d7d307b1b5505cee4c4600396" Feb 26 12:30:46 crc kubenswrapper[4724]: I0226 12:30:46.906296 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:30:46 crc kubenswrapper[4724]: I0226 12:30:46.908599 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.172002 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwvll"] Feb 26 12:30:53 crc kubenswrapper[4724]: E0226 12:30:53.173308 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b845e03-8213-4e63-bab5-ea846475f8fd" containerName="oc" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.173331 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b845e03-8213-4e63-bab5-ea846475f8fd" containerName="oc" Feb 26 12:30:53 crc kubenswrapper[4724]: E0226 12:30:53.173367 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e091f5-f546-4bf5-b14d-9ae47e7c3f96" containerName="collect-profiles" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.173376 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e091f5-f546-4bf5-b14d-9ae47e7c3f96" containerName="collect-profiles" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.173634 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b845e03-8213-4e63-bab5-ea846475f8fd" containerName="oc" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.173673 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e091f5-f546-4bf5-b14d-9ae47e7c3f96" containerName="collect-profiles" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.177396 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.192257 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwvll"] Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.212569 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h5fq\" (UniqueName: \"kubernetes.io/projected/d5c1433c-5877-4d0f-9ab7-8e18e233e110-kube-api-access-5h5fq\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.213725 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-utilities\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.213842 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-catalog-content\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.315153 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-utilities\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.315460 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-catalog-content\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.315567 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h5fq\" (UniqueName: \"kubernetes.io/projected/d5c1433c-5877-4d0f-9ab7-8e18e233e110-kube-api-access-5h5fq\") pod 
\"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.315836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-utilities\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.315859 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-catalog-content\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.344646 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h5fq\" (UniqueName: \"kubernetes.io/projected/d5c1433c-5877-4d0f-9ab7-8e18e233e110-kube-api-access-5h5fq\") pod \"certified-operators-cwvll\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:53 crc kubenswrapper[4724]: I0226 12:30:53.497676 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:30:54 crc kubenswrapper[4724]: I0226 12:30:54.021557 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwvll"] Feb 26 12:30:54 crc kubenswrapper[4724]: I0226 12:30:54.865984 4724 generic.go:334] "Generic (PLEG): container finished" podID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerID="b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464" exitCode=0 Feb 26 12:30:54 crc kubenswrapper[4724]: I0226 12:30:54.866055 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerDied","Data":"b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464"} Feb 26 12:30:54 crc kubenswrapper[4724]: I0226 12:30:54.866803 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerStarted","Data":"1654b5acb361948a33fb1ebca2e848df986aef95567c147060b0442499a66fd2"} Feb 26 12:30:55 crc kubenswrapper[4724]: I0226 12:30:55.878434 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerStarted","Data":"45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3"} Feb 26 12:30:58 crc kubenswrapper[4724]: I0226 12:30:58.912895 4724 generic.go:334] "Generic (PLEG): container finished" podID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerID="45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3" exitCode=0 Feb 26 12:30:58 crc kubenswrapper[4724]: I0226 12:30:58.912975 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerDied","Data":"45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3"} Feb 26 12:30:59 crc kubenswrapper[4724]: I0226 12:30:59.924316 4724 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerStarted","Data":"b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff"} Feb 26 12:30:59 crc kubenswrapper[4724]: I0226 12:30:59.973023 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwvll" podStartSLOduration=2.4976705 podStartE2EDuration="6.971954495s" podCreationTimestamp="2026-02-26 12:30:53 +0000 UTC" firstStartedPulling="2026-02-26 12:30:54.868074744 +0000 UTC m=+5121.523813859" lastFinishedPulling="2026-02-26 12:30:59.342358739 +0000 UTC m=+5125.998097854" observedRunningTime="2026-02-26 12:30:59.941713563 +0000 UTC m=+5126.597452688" watchObservedRunningTime="2026-02-26 12:30:59.971954495 +0000 UTC m=+5126.627693610" Feb 26 12:31:03 crc kubenswrapper[4724]: I0226 12:31:03.498648 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:31:03 crc kubenswrapper[4724]: I0226 12:31:03.499912 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:31:04 crc kubenswrapper[4724]: I0226 12:31:04.543245 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cwvll" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" probeResult="failure" output=< Feb 26 12:31:04 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:31:04 crc kubenswrapper[4724]: > Feb 26 12:31:14 crc kubenswrapper[4724]: I0226 12:31:14.554433 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cwvll" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" probeResult="failure" output=< Feb 26 12:31:14 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:31:14 crc kubenswrapper[4724]: > Feb 26 12:31:16 crc kubenswrapper[4724]: I0226 12:31:16.906971 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:31:16 crc kubenswrapper[4724]: I0226 12:31:16.907384 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:31:24 crc kubenswrapper[4724]: I0226 12:31:24.560270 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cwvll" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" probeResult="failure" output=< Feb 26 12:31:24 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:31:24 crc kubenswrapper[4724]: > Feb 26 12:31:33 crc kubenswrapper[4724]: I0226 12:31:33.838828 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:31:33 crc kubenswrapper[4724]: I0226 12:31:33.887658 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:31:34 crc kubenswrapper[4724]: I0226 12:31:34.081784 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwvll"] Feb 26 12:31:35 crc kubenswrapper[4724]: I0226 12:31:35.263464 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwvll" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" containerID="cri-o://b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff" gracePeriod=2 Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.237319 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.278284 4724 generic.go:334] "Generic (PLEG): container finished" podID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerID="b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff" exitCode=0 Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.278340 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerDied","Data":"b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff"} Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.278385 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwvll" event={"ID":"d5c1433c-5877-4d0f-9ab7-8e18e233e110","Type":"ContainerDied","Data":"1654b5acb361948a33fb1ebca2e848df986aef95567c147060b0442499a66fd2"} Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.278412 4724 scope.go:117] "RemoveContainer" containerID="b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.278405 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwvll" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.352367 4724 scope.go:117] "RemoveContainer" containerID="45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.417467 4724 scope.go:117] "RemoveContainer" containerID="b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.421031 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-utilities\") pod \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.421132 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-catalog-content\") pod \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.422935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h5fq\" (UniqueName: \"kubernetes.io/projected/d5c1433c-5877-4d0f-9ab7-8e18e233e110-kube-api-access-5h5fq\") pod \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\" (UID: \"d5c1433c-5877-4d0f-9ab7-8e18e233e110\") " Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.422942 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-utilities" (OuterVolumeSpecName: "utilities") pod "d5c1433c-5877-4d0f-9ab7-8e18e233e110" (UID: "d5c1433c-5877-4d0f-9ab7-8e18e233e110"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.423613 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.449812 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c1433c-5877-4d0f-9ab7-8e18e233e110-kube-api-access-5h5fq" (OuterVolumeSpecName: "kube-api-access-5h5fq") pod "d5c1433c-5877-4d0f-9ab7-8e18e233e110" (UID: "d5c1433c-5877-4d0f-9ab7-8e18e233e110"). InnerVolumeSpecName "kube-api-access-5h5fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.521802 4724 scope.go:117] "RemoveContainer" containerID="b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff" Feb 26 12:31:36 crc kubenswrapper[4724]: E0226 12:31:36.525657 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff\": container with ID starting with b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff not found: ID does not exist" containerID="b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.525717 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff"} err="failed to get container status \"b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff\": rpc error: code = NotFound desc = could not find container \"b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff\": container with ID starting with b5e1ee7739b4ead72271813a02d72367cb7b7039352d7392244da39abda1daff not found: ID does not exist" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.525751 4724 scope.go:117] "RemoveContainer" containerID="45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3" Feb 26 12:31:36 crc kubenswrapper[4724]: E0226 12:31:36.526260 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3\": container with ID starting with 45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3 not found: ID does not exist" containerID="45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.526293 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3"} err="failed to get container status \"45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3\": rpc error: code = NotFound desc = could not find container \"45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3\": container with ID starting with 45aaa5080ac39429713b44b823f60d9e80a4f76d7c92fef65523fa20ae1915c3 not found: ID does not exist" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.526308 4724 scope.go:117] "RemoveContainer" containerID="b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464" Feb 26 12:31:36 crc kubenswrapper[4724]: E0226 12:31:36.526504 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464\": container with ID starting with b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464 not found: ID does not exist" containerID="b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.526533 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464"} err="failed to get container status \"b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464\": rpc error: code = NotFound desc = could not 
find container \"b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464\": container with ID starting with b0eb7c42970903c586bf6b36d2d9346a2cfd60fdd017960c2d976fadf8eb4464 not found: ID does not exist" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.527657 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h5fq\" (UniqueName: \"kubernetes.io/projected/d5c1433c-5877-4d0f-9ab7-8e18e233e110-kube-api-access-5h5fq\") on node \"crc\" DevicePath \"\"" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.599201 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5c1433c-5877-4d0f-9ab7-8e18e233e110" (UID: "d5c1433c-5877-4d0f-9ab7-8e18e233e110"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.630455 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c1433c-5877-4d0f-9ab7-8e18e233e110-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.918112 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwvll"] Feb 26 12:31:36 crc kubenswrapper[4724]: I0226 12:31:36.931105 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwvll"] Feb 26 12:31:37 crc kubenswrapper[4724]: I0226 12:31:37.989166 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" path="/var/lib/kubelet/pods/d5c1433c-5877-4d0f-9ab7-8e18e233e110/volumes" Feb 26 12:31:46 crc kubenswrapper[4724]: I0226 12:31:46.919542 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:31:46 crc kubenswrapper[4724]: I0226 12:31:46.920077 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:31:46 crc kubenswrapper[4724]: I0226 12:31:46.920124 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 12:31:46 crc kubenswrapper[4724]: I0226 12:31:46.921323 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4af0530e9f0e68464b2fd18e31afbe76a5f6302651d2b503514574dd87244d41"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:31:46 crc kubenswrapper[4724]: I0226 12:31:46.921382 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" 
containerID="cri-o://4af0530e9f0e68464b2fd18e31afbe76a5f6302651d2b503514574dd87244d41" gracePeriod=600 Feb 26 12:31:47 crc kubenswrapper[4724]: I0226 12:31:47.377549 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="4af0530e9f0e68464b2fd18e31afbe76a5f6302651d2b503514574dd87244d41" exitCode=0 Feb 26 12:31:47 crc kubenswrapper[4724]: I0226 12:31:47.377622 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"4af0530e9f0e68464b2fd18e31afbe76a5f6302651d2b503514574dd87244d41"} Feb 26 12:31:47 crc kubenswrapper[4724]: I0226 12:31:47.377962 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978"} Feb 26 12:31:47 crc kubenswrapper[4724]: I0226 12:31:47.377998 4724 scope.go:117] "RemoveContainer" containerID="aea925bc6bf9b64f4e19d0d3a419bc57cdbed2b99fbeef8a0389a6e00ce7b4db" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.176682 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535152-tfl7m"] Feb 26 12:32:00 crc kubenswrapper[4724]: E0226 12:32:00.179117 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.179138 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" Feb 26 12:32:00 crc kubenswrapper[4724]: E0226 12:32:00.179160 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="extract-utilities" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.179167 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="extract-utilities" Feb 26 12:32:00 crc kubenswrapper[4724]: E0226 12:32:00.179220 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="extract-content" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.179229 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="extract-content" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.179475 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c1433c-5877-4d0f-9ab7-8e18e233e110" containerName="registry-server" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.181594 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.184968 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.187230 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.187463 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.258474 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535152-tfl7m"] Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.336659 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszc8\" (UniqueName: \"kubernetes.io/projected/b0536996-e306-4587-b1da-d6d0afaace7d-kube-api-access-lszc8\") pod \"auto-csr-approver-29535152-tfl7m\" (UID: \"b0536996-e306-4587-b1da-d6d0afaace7d\") " pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.438814 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszc8\" (UniqueName: \"kubernetes.io/projected/b0536996-e306-4587-b1da-d6d0afaace7d-kube-api-access-lszc8\") pod \"auto-csr-approver-29535152-tfl7m\" (UID: \"b0536996-e306-4587-b1da-d6d0afaace7d\") " pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.461277 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszc8\" (UniqueName: \"kubernetes.io/projected/b0536996-e306-4587-b1da-d6d0afaace7d-kube-api-access-lszc8\") pod \"auto-csr-approver-29535152-tfl7m\" (UID: \"b0536996-e306-4587-b1da-d6d0afaace7d\") " pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:00 crc kubenswrapper[4724]: I0226 12:32:00.505895 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:01 crc kubenswrapper[4724]: I0226 12:32:01.081300 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535152-tfl7m"] Feb 26 12:32:01 crc kubenswrapper[4724]: I0226 12:32:01.522362 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" event={"ID":"b0536996-e306-4587-b1da-d6d0afaace7d","Type":"ContainerStarted","Data":"fde01aa7df3119b38a34cc3ed0761f6566e72bda8dfa7b8c34435a4c0a255eba"} Feb 26 12:32:03 crc kubenswrapper[4724]: I0226 12:32:03.543590 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" event={"ID":"b0536996-e306-4587-b1da-d6d0afaace7d","Type":"ContainerStarted","Data":"69cad349b52ac61f7efd7147f79db564cdff87adb7563b6849c173235c2a563b"} Feb 26 12:32:03 crc kubenswrapper[4724]: I0226 12:32:03.568318 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" podStartSLOduration=2.5726361410000003 podStartE2EDuration="3.568295439s" podCreationTimestamp="2026-02-26 12:32:00 +0000 UTC" firstStartedPulling="2026-02-26 12:32:01.107058486 +0000 UTC m=+5187.762797601" lastFinishedPulling="2026-02-26 12:32:02.102717784 +0000 UTC m=+5188.758456899" observedRunningTime="2026-02-26 12:32:03.561256549 +0000 UTC m=+5190.216995664" watchObservedRunningTime="2026-02-26 12:32:03.568295439 +0000 UTC m=+5190.224034554" Feb 26 12:32:04 crc kubenswrapper[4724]: I0226 12:32:04.555864 4724 generic.go:334] "Generic (PLEG): container finished" podID="b0536996-e306-4587-b1da-d6d0afaace7d" containerID="69cad349b52ac61f7efd7147f79db564cdff87adb7563b6849c173235c2a563b" exitCode=0 Feb 26 12:32:04 crc kubenswrapper[4724]: I0226 12:32:04.555909 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" event={"ID":"b0536996-e306-4587-b1da-d6d0afaace7d","Type":"ContainerDied","Data":"69cad349b52ac61f7efd7147f79db564cdff87adb7563b6849c173235c2a563b"} Feb 26 12:32:05 crc kubenswrapper[4724]: I0226 12:32:05.924849 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.052142 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszc8\" (UniqueName: \"kubernetes.io/projected/b0536996-e306-4587-b1da-d6d0afaace7d-kube-api-access-lszc8\") pod \"b0536996-e306-4587-b1da-d6d0afaace7d\" (UID: \"b0536996-e306-4587-b1da-d6d0afaace7d\") " Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.061415 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0536996-e306-4587-b1da-d6d0afaace7d-kube-api-access-lszc8" (OuterVolumeSpecName: "kube-api-access-lszc8") pod "b0536996-e306-4587-b1da-d6d0afaace7d" (UID: "b0536996-e306-4587-b1da-d6d0afaace7d"). InnerVolumeSpecName "kube-api-access-lszc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.161168 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lszc8\" (UniqueName: \"kubernetes.io/projected/b0536996-e306-4587-b1da-d6d0afaace7d-kube-api-access-lszc8\") on node \"crc\" DevicePath \"\"" Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.585336 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" event={"ID":"b0536996-e306-4587-b1da-d6d0afaace7d","Type":"ContainerDied","Data":"fde01aa7df3119b38a34cc3ed0761f6566e72bda8dfa7b8c34435a4c0a255eba"} Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.585666 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde01aa7df3119b38a34cc3ed0761f6566e72bda8dfa7b8c34435a4c0a255eba" Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.585764 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535152-tfl7m" Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.726244 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535146-k6td2"] Feb 26 12:32:06 crc kubenswrapper[4724]: I0226 12:32:06.754058 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535146-k6td2"] Feb 26 12:32:07 crc kubenswrapper[4724]: I0226 12:32:07.989062 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac2157f7-d6e6-4b07-b51a-ec1d3035a24f" path="/var/lib/kubelet/pods/ac2157f7-d6e6-4b07-b51a-ec1d3035a24f/volumes" Feb 26 12:32:21 crc kubenswrapper[4724]: I0226 12:32:21.903807 4724 scope.go:117] "RemoveContainer" containerID="95b7c58bd5b68d8f55520c3ee834137c05fb308d4714da0a5b37c3160bdc499a" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.157273 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535154-7m2qc"] Feb 26 12:34:00 crc kubenswrapper[4724]: E0226 12:34:00.159090 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0536996-e306-4587-b1da-d6d0afaace7d" containerName="oc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.159111 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0536996-e306-4587-b1da-d6d0afaace7d" containerName="oc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.159421 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0536996-e306-4587-b1da-d6d0afaace7d" containerName="oc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.161494 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.163969 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.164397 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.164459 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.215407 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdhf\" (UniqueName: \"kubernetes.io/projected/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9-kube-api-access-frdhf\") pod \"auto-csr-approver-29535154-7m2qc\" (UID: \"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9\") " pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.226971 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535154-7m2qc"] Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.316786 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdhf\" (UniqueName: \"kubernetes.io/projected/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9-kube-api-access-frdhf\") pod \"auto-csr-approver-29535154-7m2qc\" (UID: \"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9\") " pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.338102 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frdhf\" (UniqueName: \"kubernetes.io/projected/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9-kube-api-access-frdhf\") pod \"auto-csr-approver-29535154-7m2qc\" (UID: \"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9\") " pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:00 crc kubenswrapper[4724]: I0226 12:34:00.490346 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:01 crc kubenswrapper[4724]: I0226 12:34:01.359465 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535154-7m2qc"] Feb 26 12:34:01 crc kubenswrapper[4724]: I0226 12:34:01.376163 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:34:01 crc kubenswrapper[4724]: I0226 12:34:01.639292 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" event={"ID":"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9","Type":"ContainerStarted","Data":"34bd65ce6d0852525daa190c1ab7ff79b06bbcba06eaa862099215276f5a162a"} Feb 26 12:34:03 crc kubenswrapper[4724]: I0226 12:34:03.657274 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" event={"ID":"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9","Type":"ContainerStarted","Data":"411c904a717766698dbde0e0dadb4facc2674f9a475ff1894f6ef9c222f88a4c"} Feb 26 12:34:03 crc kubenswrapper[4724]: I0226 12:34:03.679738 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" podStartSLOduration=2.048529064 podStartE2EDuration="3.67970327s" podCreationTimestamp="2026-02-26 12:34:00 +0000 UTC" firstStartedPulling="2026-02-26 12:34:01.372069681 +0000 UTC m=+5308.027808796" lastFinishedPulling="2026-02-26 12:34:03.003243887 +0000 UTC m=+5309.658983002" observedRunningTime="2026-02-26 12:34:03.669703924 +0000 UTC m=+5310.325443039" watchObservedRunningTime="2026-02-26 12:34:03.67970327 +0000 UTC m=+5310.335442385" Feb 26 12:34:04 crc kubenswrapper[4724]: I0226 12:34:04.667220 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" event={"ID":"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9","Type":"ContainerDied","Data":"411c904a717766698dbde0e0dadb4facc2674f9a475ff1894f6ef9c222f88a4c"} Feb 26 12:34:04 crc kubenswrapper[4724]: I0226 12:34:04.667065 4724 generic.go:334] "Generic (PLEG): container finished" podID="f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9" containerID="411c904a717766698dbde0e0dadb4facc2674f9a475ff1894f6ef9c222f88a4c" exitCode=0 Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.143620 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.337982 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frdhf\" (UniqueName: \"kubernetes.io/projected/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9-kube-api-access-frdhf\") pod \"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9\" (UID: \"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9\") " Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.351996 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9-kube-api-access-frdhf" (OuterVolumeSpecName: "kube-api-access-frdhf") pod "f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9" (UID: "f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9"). InnerVolumeSpecName "kube-api-access-frdhf". 
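Note: each of these job pods carries exactly one volume, a projected "kube-api-access-*" volume, and the records trace its full lifecycle: VerifyControllerAttachedVolume and MountVolume.SetUp before the container starts, then UnmountVolume.TearDown and "Volume detached" after it exits. The log never prints the volume's contents; assuming the standard shape of such a volume, it projects the bound service-account token, the CA bundle from kube-root-ca.crt (one of the ConfigMaps the reflector lines above cache), and the pod's namespace. Sketched as client-go types, with the expiry value an assumed typical default:

    // Assumed typical layout of a kube-api-access-* projected volume.
    package sketch

    import corev1 "k8s.io/api/core/v1"

    var tokenExpiry = int64(3607) // ~1h; the exact value here is an assumption

    var kubeAPIAccess = corev1.Volume{
        Name: "kube-api-access-frdhf",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                        Path:              "token",
                        ExpirationSeconds: &tokenExpiry,
                    }},
                    {ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                        Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                    }},
                    {DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "namespace",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                        }},
                    }},
                },
            },
        },
    }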
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.441238 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frdhf\" (UniqueName: \"kubernetes.io/projected/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9-kube-api-access-frdhf\") on node \"crc\" DevicePath \"\"" Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.688961 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" event={"ID":"f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9","Type":"ContainerDied","Data":"34bd65ce6d0852525daa190c1ab7ff79b06bbcba06eaa862099215276f5a162a"} Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.689240 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34bd65ce6d0852525daa190c1ab7ff79b06bbcba06eaa862099215276f5a162a" Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.689392 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535154-7m2qc" Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.745085 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535148-k9r6q"] Feb 26 12:34:06 crc kubenswrapper[4724]: I0226 12:34:06.760056 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535148-k9r6q"] Feb 26 12:34:08 crc kubenswrapper[4724]: I0226 12:34:08.000695 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2832e830-817f-468f-9100-f7377498be57" path="/var/lib/kubelet/pods/2832e830-817f-468f-9100-f7377498be57/volumes" Feb 26 12:34:16 crc kubenswrapper[4724]: I0226 12:34:16.906850 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:34:16 crc kubenswrapper[4724]: I0226 12:34:16.907474 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:34:22 crc kubenswrapper[4724]: I0226 12:34:22.045062 4724 scope.go:117] "RemoveContainer" containerID="9ea5a773ae0f2aee8ea8896b217dd43e8e35591a3253c2623aed0ee4ad7f93ad" Feb 26 12:34:46 crc kubenswrapper[4724]: I0226 12:34:46.907148 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:34:46 crc kubenswrapper[4724]: I0226 12:34:46.907640 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.377032 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-59gpc"] Feb 26 12:35:14 crc 
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.380246 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9" containerName="oc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.380560 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9" containerName="oc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.383593 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.389755 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59gpc"]
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.474093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-catalog-content\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.474922 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-utilities\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.475208 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v948s\" (UniqueName: \"kubernetes.io/projected/ca92f621-01d9-44c1-8cce-1f3b052914d2-kube-api-access-v948s\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.576915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-catalog-content\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.577028 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-utilities\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.577085 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v948s\" (UniqueName: \"kubernetes.io/projected/ca92f621-01d9-44c1-8cce-1f3b052914d2-kube-api-access-v948s\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.577558 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-catalog-content\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.577714 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-utilities\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.596325 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v948s\" (UniqueName: \"kubernetes.io/projected/ca92f621-01d9-44c1-8cce-1f3b052914d2-kube-api-access-v948s\") pod \"redhat-marketplace-59gpc\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:14 crc kubenswrapper[4724]: I0226 12:35:14.715308 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59gpc"
Feb 26 12:35:15 crc kubenswrapper[4724]: I0226 12:35:15.236138 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59gpc"]
Feb 26 12:35:15 crc kubenswrapper[4724]: I0226 12:35:15.716053 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerID="774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd" exitCode=0
Feb 26 12:35:15 crc kubenswrapper[4724]: I0226 12:35:15.716470 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerDied","Data":"774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd"}
Feb 26 12:35:15 crc kubenswrapper[4724]: I0226 12:35:15.717314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerStarted","Data":"a0bfe916de1dcbc830608ef7c0ca0371b1e500730d4264dd7164db86226ef59a"}
Feb 26 12:35:16 crc kubenswrapper[4724]: I0226 12:35:16.905914 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:35:16 crc kubenswrapper[4724]: I0226 12:35:16.906315 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:35:16 crc kubenswrapper[4724]: I0226 12:35:16.906366 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 12:35:16 crc kubenswrapper[4724]: I0226 12:35:16.910071 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 12:35:16 crc kubenswrapper[4724]: I0226 12:35:16.910147 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" gracePeriod=600
Feb 26 12:35:17 crc kubenswrapper[4724]: E0226 12:35:17.031155 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:35:17 crc kubenswrapper[4724]: I0226 12:35:17.753536 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerStarted","Data":"d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0"}
Feb 26 12:35:17 crc kubenswrapper[4724]: I0226 12:35:17.758162 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" exitCode=0
Feb 26 12:35:17 crc kubenswrapper[4724]: I0226 12:35:17.758255 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978"}
Feb 26 12:35:17 crc kubenswrapper[4724]: I0226 12:35:17.758326 4724 scope.go:117] "RemoveContainer" containerID="4af0530e9f0e68464b2fd18e31afbe76a5f6302651d2b503514574dd87244d41"
Feb 26 12:35:17 crc kubenswrapper[4724]: I0226 12:35:17.759607 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978"
Feb 26 12:35:17 crc kubenswrapper[4724]: E0226 12:35:17.760406 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:35:18 crc kubenswrapper[4724]: I0226 12:35:18.779060 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerID="d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0" exitCode=0
Feb 26 12:35:18 crc kubenswrapper[4724]: I0226 12:35:18.779113 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerDied","Data":"d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0"}
Feb 26 12:35:19 crc kubenswrapper[4724]: I0226 12:35:19.791506 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerStarted","Data":"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db"}
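Note: at 12:35:16 the repeated liveness failures cross the probe's failure threshold, so the kubelet kills the machine-config-daemon container (the gracePeriod=600 comes from the pod's termination grace period) and schedules a restart. Because the container has already failed repeatedly, the restart is gated by CrashLoopBackOff, whose delay doubles from a short base up to a five-minute cap; that cap is why every retry in this log reports "back-off 5m0s". A sketch of the ladder, assuming the kubelet's usual constants (10s base, doubling, 5m cap):

    // CrashLoopBackOff delay ladder under assumed default constants:
    // 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s for every later restart.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxBackoff = 5 * time.Minute
        delay := 10 * time.Second
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("restart %d: wait %v\n", attempt, delay)
            if delay *= 2; delay > maxBackoff {
                delay = maxBackoff
            }
        }
    }

Interleaved with this, the redhat-marketplace-59gpc catalog pod continues its normal startup (extract-utilities, then extract-content, then registry-server), unaffected by the daemon's crash loop.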
event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerStarted","Data":"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db"} Feb 26 12:35:19 crc kubenswrapper[4724]: I0226 12:35:19.812110 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-59gpc" podStartSLOduration=2.136808623 podStartE2EDuration="5.812083784s" podCreationTimestamp="2026-02-26 12:35:14 +0000 UTC" firstStartedPulling="2026-02-26 12:35:15.726399467 +0000 UTC m=+5382.382138582" lastFinishedPulling="2026-02-26 12:35:19.401674618 +0000 UTC m=+5386.057413743" observedRunningTime="2026-02-26 12:35:19.808433721 +0000 UTC m=+5386.464172856" watchObservedRunningTime="2026-02-26 12:35:19.812083784 +0000 UTC m=+5386.467822899" Feb 26 12:35:24 crc kubenswrapper[4724]: I0226 12:35:24.716120 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-59gpc" Feb 26 12:35:24 crc kubenswrapper[4724]: I0226 12:35:24.716834 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-59gpc" Feb 26 12:35:25 crc kubenswrapper[4724]: I0226 12:35:25.764742 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-59gpc" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="registry-server" probeResult="failure" output=< Feb 26 12:35:25 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:35:25 crc kubenswrapper[4724]: > Feb 26 12:35:31 crc kubenswrapper[4724]: I0226 12:35:31.975922 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:35:31 crc kubenswrapper[4724]: E0226 12:35:31.977611 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:35:34 crc kubenswrapper[4724]: I0226 12:35:34.773885 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-59gpc" Feb 26 12:35:34 crc kubenswrapper[4724]: I0226 12:35:34.827429 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-59gpc" Feb 26 12:35:35 crc kubenswrapper[4724]: I0226 12:35:35.013236 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59gpc"] Feb 26 12:35:35 crc kubenswrapper[4724]: I0226 12:35:35.931095 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-59gpc" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="registry-server" containerID="cri-o://95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db" gracePeriod=2 Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.503007 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59gpc" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.631584 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-catalog-content\") pod \"ca92f621-01d9-44c1-8cce-1f3b052914d2\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.631774 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-utilities\") pod \"ca92f621-01d9-44c1-8cce-1f3b052914d2\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.631800 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v948s\" (UniqueName: \"kubernetes.io/projected/ca92f621-01d9-44c1-8cce-1f3b052914d2-kube-api-access-v948s\") pod \"ca92f621-01d9-44c1-8cce-1f3b052914d2\" (UID: \"ca92f621-01d9-44c1-8cce-1f3b052914d2\") " Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.634252 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-utilities" (OuterVolumeSpecName: "utilities") pod "ca92f621-01d9-44c1-8cce-1f3b052914d2" (UID: "ca92f621-01d9-44c1-8cce-1f3b052914d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.653122 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca92f621-01d9-44c1-8cce-1f3b052914d2-kube-api-access-v948s" (OuterVolumeSpecName: "kube-api-access-v948s") pod "ca92f621-01d9-44c1-8cce-1f3b052914d2" (UID: "ca92f621-01d9-44c1-8cce-1f3b052914d2"). InnerVolumeSpecName "kube-api-access-v948s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.664555 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca92f621-01d9-44c1-8cce-1f3b052914d2" (UID: "ca92f621-01d9-44c1-8cce-1f3b052914d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.735333 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.735371 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca92f621-01d9-44c1-8cce-1f3b052914d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.735384 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v948s\" (UniqueName: \"kubernetes.io/projected/ca92f621-01d9-44c1-8cce-1f3b052914d2-kube-api-access-v948s\") on node \"crc\" DevicePath \"\"" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.947307 4724 generic.go:334] "Generic (PLEG): container finished" podID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerID="95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db" exitCode=0 Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.947364 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59gpc" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.947383 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerDied","Data":"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db"} Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.947723 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59gpc" event={"ID":"ca92f621-01d9-44c1-8cce-1f3b052914d2","Type":"ContainerDied","Data":"a0bfe916de1dcbc830608ef7c0ca0371b1e500730d4264dd7164db86226ef59a"} Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.947739 4724 scope.go:117] "RemoveContainer" containerID="95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.979776 4724 scope.go:117] "RemoveContainer" containerID="d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0" Feb 26 12:35:36 crc kubenswrapper[4724]: I0226 12:35:36.994770 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59gpc"] Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.004778 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-59gpc"] Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.230292 4724 scope.go:117] "RemoveContainer" containerID="774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.271860 4724 scope.go:117] "RemoveContainer" containerID="95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db" Feb 26 12:35:37 crc kubenswrapper[4724]: E0226 12:35:37.274334 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db\": container with ID starting with 95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db not found: ID does not exist" containerID="95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.274399 4724 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db"} err="failed to get container status \"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db\": rpc error: code = NotFound desc = could not find container \"95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db\": container with ID starting with 95ac588374d63e2818751468e1863aaec251ee32b56d8a7c159d7667481b09db not found: ID does not exist" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.274428 4724 scope.go:117] "RemoveContainer" containerID="d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0" Feb 26 12:35:37 crc kubenswrapper[4724]: E0226 12:35:37.274742 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0\": container with ID starting with d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0 not found: ID does not exist" containerID="d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.274786 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0"} err="failed to get container status \"d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0\": rpc error: code = NotFound desc = could not find container \"d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0\": container with ID starting with d366e53fa3c18e43c0852c7b2a70de917e40dec630573762eed7c9df6d568db0 not found: ID does not exist" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.274819 4724 scope.go:117] "RemoveContainer" containerID="774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd" Feb 26 12:35:37 crc kubenswrapper[4724]: E0226 12:35:37.275115 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd\": container with ID starting with 774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd not found: ID does not exist" containerID="774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.275144 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd"} err="failed to get container status \"774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd\": rpc error: code = NotFound desc = could not find container \"774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd\": container with ID starting with 774b15f97a53a59ea04869f2d63cf77470eaf526d92943389b6f1ec110c809cd not found: ID does not exist" Feb 26 12:35:37 crc kubenswrapper[4724]: I0226 12:35:37.989300 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" path="/var/lib/kubelet/pods/ca92f621-01d9-44c1-8cce-1f3b052914d2/volumes" Feb 26 12:35:44 crc kubenswrapper[4724]: I0226 12:35:44.975764 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:35:44 crc kubenswrapper[4724]: E0226 12:35:44.976491 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
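Note: the "ContainerStatus from runtime service failed ... NotFound" errors above look alarming but are the benign tail of pod deletion: the kubelet retries RemoveContainer for containers whose records CRI-O has already deleted, gets gRPC NotFound back, logs the error, and carries on, because "already gone" is a successful outcome for cleanup. The same pattern in caller code is a standard idempotency check (a sketch, not the kubelet's actual code):

    // Treating NotFound as success when deleting a container via a gRPC runtime.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func removeContainer(id string, runtimeRemove func(string) error) error {
        if err := runtimeRemove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already removed: cleanup is complete
            }
            return fmt.Errorf("remove %s: %w", id, err)
        }
        return nil
    }

    func main() {
        gone := func(string) error { return status.Error(codes.NotFound, "could not find container") }
        fmt.Println(removeContainer("95ac5883", gone)) // prints <nil>
    }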
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:35:55 crc kubenswrapper[4724]: I0226 12:35:55.981159 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:35:55 crc kubenswrapper[4724]: E0226 12:35:55.982010 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.147865 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535156-hrq2j"] Feb 26 12:36:00 crc kubenswrapper[4724]: E0226 12:36:00.148976 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="extract-utilities" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.148990 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="extract-utilities" Feb 26 12:36:00 crc kubenswrapper[4724]: E0226 12:36:00.149007 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="registry-server" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.149012 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="registry-server" Feb 26 12:36:00 crc kubenswrapper[4724]: E0226 12:36:00.149023 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="extract-content" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.149029 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="extract-content" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.149228 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca92f621-01d9-44c1-8cce-1f3b052914d2" containerName="registry-server" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.149892 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.153700 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.153848 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.153960 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.169387 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535156-hrq2j"] Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.223987 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vs7\" (UniqueName: \"kubernetes.io/projected/44bd9ad3-49b2-47a8-b90b-bf589333ac94-kube-api-access-p8vs7\") pod \"auto-csr-approver-29535156-hrq2j\" (UID: \"44bd9ad3-49b2-47a8-b90b-bf589333ac94\") " pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.325791 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vs7\" (UniqueName: \"kubernetes.io/projected/44bd9ad3-49b2-47a8-b90b-bf589333ac94-kube-api-access-p8vs7\") pod \"auto-csr-approver-29535156-hrq2j\" (UID: \"44bd9ad3-49b2-47a8-b90b-bf589333ac94\") " pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.351859 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vs7\" (UniqueName: \"kubernetes.io/projected/44bd9ad3-49b2-47a8-b90b-bf589333ac94-kube-api-access-p8vs7\") pod \"auto-csr-approver-29535156-hrq2j\" (UID: \"44bd9ad3-49b2-47a8-b90b-bf589333ac94\") " pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.478287 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:00 crc kubenswrapper[4724]: I0226 12:36:00.973978 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535156-hrq2j"] Feb 26 12:36:01 crc kubenswrapper[4724]: I0226 12:36:01.191779 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" event={"ID":"44bd9ad3-49b2-47a8-b90b-bf589333ac94","Type":"ContainerStarted","Data":"008a6d0b00ca88c80fc9d045c05c3eecfea098f4df1d371e6a993d937b8e949e"} Feb 26 12:36:03 crc kubenswrapper[4724]: I0226 12:36:03.212451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" event={"ID":"44bd9ad3-49b2-47a8-b90b-bf589333ac94","Type":"ContainerStarted","Data":"93d5bc7a19489f6c2796097e2caf8abda8bd7d81dda5d19ff9e4c8efe472d606"} Feb 26 12:36:03 crc kubenswrapper[4724]: I0226 12:36:03.232154 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" podStartSLOduration=1.741180881 podStartE2EDuration="3.232127653s" podCreationTimestamp="2026-02-26 12:36:00 +0000 UTC" firstStartedPulling="2026-02-26 12:36:00.98303051 +0000 UTC m=+5427.638769625" lastFinishedPulling="2026-02-26 12:36:02.473977282 +0000 UTC m=+5429.129716397" observedRunningTime="2026-02-26 12:36:03.230038379 +0000 UTC m=+5429.885777514" watchObservedRunningTime="2026-02-26 12:36:03.232127653 +0000 UTC m=+5429.887866768" Feb 26 12:36:04 crc kubenswrapper[4724]: I0226 12:36:04.223472 4724 generic.go:334] "Generic (PLEG): container finished" podID="44bd9ad3-49b2-47a8-b90b-bf589333ac94" containerID="93d5bc7a19489f6c2796097e2caf8abda8bd7d81dda5d19ff9e4c8efe472d606" exitCode=0 Feb 26 12:36:04 crc kubenswrapper[4724]: I0226 12:36:04.223520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" event={"ID":"44bd9ad3-49b2-47a8-b90b-bf589333ac94","Type":"ContainerDied","Data":"93d5bc7a19489f6c2796097e2caf8abda8bd7d81dda5d19ff9e4c8efe472d606"} Feb 26 12:36:05 crc kubenswrapper[4724]: I0226 12:36:05.637784 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:05 crc kubenswrapper[4724]: I0226 12:36:05.833710 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vs7\" (UniqueName: \"kubernetes.io/projected/44bd9ad3-49b2-47a8-b90b-bf589333ac94-kube-api-access-p8vs7\") pod \"44bd9ad3-49b2-47a8-b90b-bf589333ac94\" (UID: \"44bd9ad3-49b2-47a8-b90b-bf589333ac94\") " Feb 26 12:36:05 crc kubenswrapper[4724]: I0226 12:36:05.848434 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bd9ad3-49b2-47a8-b90b-bf589333ac94-kube-api-access-p8vs7" (OuterVolumeSpecName: "kube-api-access-p8vs7") pod "44bd9ad3-49b2-47a8-b90b-bf589333ac94" (UID: "44bd9ad3-49b2-47a8-b90b-bf589333ac94"). InnerVolumeSpecName "kube-api-access-p8vs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:36:05 crc kubenswrapper[4724]: I0226 12:36:05.935833 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vs7\" (UniqueName: \"kubernetes.io/projected/44bd9ad3-49b2-47a8-b90b-bf589333ac94-kube-api-access-p8vs7\") on node \"crc\" DevicePath \"\"" Feb 26 12:36:06 crc kubenswrapper[4724]: I0226 12:36:06.243981 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" event={"ID":"44bd9ad3-49b2-47a8-b90b-bf589333ac94","Type":"ContainerDied","Data":"008a6d0b00ca88c80fc9d045c05c3eecfea098f4df1d371e6a993d937b8e949e"} Feb 26 12:36:06 crc kubenswrapper[4724]: I0226 12:36:06.244038 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="008a6d0b00ca88c80fc9d045c05c3eecfea098f4df1d371e6a993d937b8e949e" Feb 26 12:36:06 crc kubenswrapper[4724]: I0226 12:36:06.244101 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535156-hrq2j" Feb 26 12:36:06 crc kubenswrapper[4724]: I0226 12:36:06.302849 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535150-64kp5"] Feb 26 12:36:06 crc kubenswrapper[4724]: I0226 12:36:06.318491 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535150-64kp5"] Feb 26 12:36:06 crc kubenswrapper[4724]: I0226 12:36:06.980807 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:36:06 crc kubenswrapper[4724]: E0226 12:36:06.981369 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:36:07 crc kubenswrapper[4724]: I0226 12:36:07.987916 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b845e03-8213-4e63-bab5-ea846475f8fd" path="/var/lib/kubelet/pods/3b845e03-8213-4e63-bab5-ea846475f8fd/volumes" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.711797 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpjqj"] Feb 26 12:36:10 crc kubenswrapper[4724]: E0226 12:36:10.713739 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bd9ad3-49b2-47a8-b90b-bf589333ac94" containerName="oc" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.713765 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bd9ad3-49b2-47a8-b90b-bf589333ac94" containerName="oc" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.714042 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bd9ad3-49b2-47a8-b90b-bf589333ac94" containerName="oc" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.715467 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.726202 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpjqj"] Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.772994 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-catalog-content\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.773073 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-utilities\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.773456 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxtc\" (UniqueName: \"kubernetes.io/projected/80851036-487b-4aff-970b-6c60b77089dd-kube-api-access-ltxtc\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.874885 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxtc\" (UniqueName: \"kubernetes.io/projected/80851036-487b-4aff-970b-6c60b77089dd-kube-api-access-ltxtc\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.875051 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-catalog-content\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.875085 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-utilities\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.875567 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-catalog-content\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.875567 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-utilities\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:10 crc kubenswrapper[4724]: I0226 12:36:10.896648 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ltxtc\" (UniqueName: \"kubernetes.io/projected/80851036-487b-4aff-970b-6c60b77089dd-kube-api-access-ltxtc\") pod \"redhat-operators-qpjqj\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:11 crc kubenswrapper[4724]: I0226 12:36:11.040431 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:11 crc kubenswrapper[4724]: I0226 12:36:11.536247 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpjqj"] Feb 26 12:36:12 crc kubenswrapper[4724]: I0226 12:36:12.297150 4724 generic.go:334] "Generic (PLEG): container finished" podID="80851036-487b-4aff-970b-6c60b77089dd" containerID="4dcf02c680c09a30b52f01337fad5b6f8ff4aff990973711955632bbc191cf08" exitCode=0 Feb 26 12:36:12 crc kubenswrapper[4724]: I0226 12:36:12.297221 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerDied","Data":"4dcf02c680c09a30b52f01337fad5b6f8ff4aff990973711955632bbc191cf08"} Feb 26 12:36:12 crc kubenswrapper[4724]: I0226 12:36:12.297648 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerStarted","Data":"c7278f9418b175c244ffa43e6a6cd0c5cf0e28ba63d28ae15bcc4b5ff0eb0ac2"} Feb 26 12:36:13 crc kubenswrapper[4724]: I0226 12:36:13.307677 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerStarted","Data":"3df3ac798e22a2782fa561940c49a87024f78a0719e63160849ef3a9283510e8"} Feb 26 12:36:19 crc kubenswrapper[4724]: I0226 12:36:19.363422 4724 generic.go:334] "Generic (PLEG): container finished" podID="80851036-487b-4aff-970b-6c60b77089dd" containerID="3df3ac798e22a2782fa561940c49a87024f78a0719e63160849ef3a9283510e8" exitCode=0 Feb 26 12:36:19 crc kubenswrapper[4724]: I0226 12:36:19.363472 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerDied","Data":"3df3ac798e22a2782fa561940c49a87024f78a0719e63160849ef3a9283510e8"} Feb 26 12:36:20 crc kubenswrapper[4724]: I0226 12:36:20.376853 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerStarted","Data":"312b6186fd569c2957e5242506c37ee951df0047886884c74a251383349bbb89"} Feb 26 12:36:20 crc kubenswrapper[4724]: I0226 12:36:20.406401 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qpjqj" podStartSLOduration=2.832696881 podStartE2EDuration="10.406379734s" podCreationTimestamp="2026-02-26 12:36:10 +0000 UTC" firstStartedPulling="2026-02-26 12:36:12.300442242 +0000 UTC m=+5438.956181357" lastFinishedPulling="2026-02-26 12:36:19.874125085 +0000 UTC m=+5446.529864210" observedRunningTime="2026-02-26 12:36:20.401538961 +0000 UTC m=+5447.057278076" watchObservedRunningTime="2026-02-26 12:36:20.406379734 +0000 UTC m=+5447.062118849" Feb 26 12:36:20 crc kubenswrapper[4724]: I0226 12:36:20.975255 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 
12:36:20 crc kubenswrapper[4724]: E0226 12:36:20.975867 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:36:21 crc kubenswrapper[4724]: I0226 12:36:21.040910 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:21 crc kubenswrapper[4724]: I0226 12:36:21.040979 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:36:22 crc kubenswrapper[4724]: I0226 12:36:22.140945 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" probeResult="failure" output=< Feb 26 12:36:22 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:36:22 crc kubenswrapper[4724]: > Feb 26 12:36:22 crc kubenswrapper[4724]: I0226 12:36:22.146700 4724 scope.go:117] "RemoveContainer" containerID="783f8756c9a2099f1b9c0717fa941ecaaf8a92f1819d0e9c3fa40f1f7d01b044" Feb 26 12:36:32 crc kubenswrapper[4724]: I0226 12:36:32.085297 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" probeResult="failure" output=< Feb 26 12:36:32 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:36:32 crc kubenswrapper[4724]: > Feb 26 12:36:32 crc kubenswrapper[4724]: I0226 12:36:32.976232 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:36:32 crc kubenswrapper[4724]: E0226 12:36:32.976972 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:36:42 crc kubenswrapper[4724]: I0226 12:36:42.097307 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" probeResult="failure" output=< Feb 26 12:36:42 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:36:42 crc kubenswrapper[4724]: > Feb 26 12:36:44 crc kubenswrapper[4724]: I0226 12:36:44.975176 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:36:44 crc kubenswrapper[4724]: E0226 12:36:44.975975 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" 
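Note: the marketplace catalog pods in this stretch (redhat-marketplace-59gpc, redhat-operators-qpjqj, and community-operators-52j8x below) all follow the same shape: an extract-utilities and an extract-content container each run to completion, then registry-server starts and serves the catalog over gRPC on port 50051. The startup probe's "timeout: failed to connect service \":50051\" within 1s" failures mean the server is still loading the catalog; once it answers, the probes flip to started/ready, as they did for 59gpc at 12:35:34. The probe is a gRPC health check; a sketch of the equivalent client, assuming the standard grpc.health.v1 service:

    // grpc_health_probe-style check against the registry-server port
    // (a sketch; the pod's actual probe command isn't shown in the log).
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
            return
        }
        defer conn.Close()
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("health check failed:", err)
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog is loaded
    }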
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.646299 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52j8x"] Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.660471 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.721335 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52j8x"] Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.774988 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-utilities\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.775122 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-catalog-content\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.775218 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxvt\" (UniqueName: \"kubernetes.io/projected/82af98e0-3bc2-4384-adcb-fef0c343da75-kube-api-access-mrxvt\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.877329 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-utilities\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.877420 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-catalog-content\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.877468 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxvt\" (UniqueName: \"kubernetes.io/projected/82af98e0-3bc2-4384-adcb-fef0c343da75-kube-api-access-mrxvt\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.878262 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-catalog-content\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:46 crc kubenswrapper[4724]: I0226 12:36:46.878509 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-utilities\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:47 crc kubenswrapper[4724]: I0226 12:36:47.074747 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxvt\" (UniqueName: \"kubernetes.io/projected/82af98e0-3bc2-4384-adcb-fef0c343da75-kube-api-access-mrxvt\") pod \"community-operators-52j8x\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:47 crc kubenswrapper[4724]: I0226 12:36:47.299338 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:47 crc kubenswrapper[4724]: I0226 12:36:47.930970 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52j8x"] Feb 26 12:36:48 crc kubenswrapper[4724]: I0226 12:36:48.660083 4724 generic.go:334] "Generic (PLEG): container finished" podID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerID="1dbf4575472db34184db9711cb9e9281f0e3dea8f0d546ce34c2e52951c125b1" exitCode=0 Feb 26 12:36:48 crc kubenswrapper[4724]: I0226 12:36:48.660130 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerDied","Data":"1dbf4575472db34184db9711cb9e9281f0e3dea8f0d546ce34c2e52951c125b1"} Feb 26 12:36:48 crc kubenswrapper[4724]: I0226 12:36:48.660318 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerStarted","Data":"997a64f1895cafcb1316530358ef929341fcad60091fa243bee9ad0e54d1fb7e"} Feb 26 12:36:50 crc kubenswrapper[4724]: I0226 12:36:50.680005 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerStarted","Data":"ee9f7efc5513395f0c54fa809a6bb9a60c054e7a36e6022950bf2e71365f0cff"} Feb 26 12:36:52 crc kubenswrapper[4724]: I0226 12:36:52.465427 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" probeResult="failure" output=< Feb 26 12:36:52 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:36:52 crc kubenswrapper[4724]: > Feb 26 12:36:52 crc kubenswrapper[4724]: I0226 12:36:52.698694 4724 generic.go:334] "Generic (PLEG): container finished" podID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerID="ee9f7efc5513395f0c54fa809a6bb9a60c054e7a36e6022950bf2e71365f0cff" exitCode=0 Feb 26 12:36:52 crc kubenswrapper[4724]: I0226 12:36:52.698748 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerDied","Data":"ee9f7efc5513395f0c54fa809a6bb9a60c054e7a36e6022950bf2e71365f0cff"} Feb 26 12:36:53 crc kubenswrapper[4724]: I0226 12:36:53.709693 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" 
event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerStarted","Data":"826e8616a276ab4819fc07e17aa8a515cb8b5db92755fbf5ab473c51cbd4a110"} Feb 26 12:36:53 crc kubenswrapper[4724]: I0226 12:36:53.741523 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52j8x" podStartSLOduration=3.127610507 podStartE2EDuration="7.74150375s" podCreationTimestamp="2026-02-26 12:36:46 +0000 UTC" firstStartedPulling="2026-02-26 12:36:48.66352402 +0000 UTC m=+5475.319263135" lastFinishedPulling="2026-02-26 12:36:53.277417253 +0000 UTC m=+5479.933156378" observedRunningTime="2026-02-26 12:36:53.736714987 +0000 UTC m=+5480.392454122" watchObservedRunningTime="2026-02-26 12:36:53.74150375 +0000 UTC m=+5480.397242865" Feb 26 12:36:57 crc kubenswrapper[4724]: I0226 12:36:57.300359 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:57 crc kubenswrapper[4724]: I0226 12:36:57.300903 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:36:57 crc kubenswrapper[4724]: I0226 12:36:57.979269 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:36:57 crc kubenswrapper[4724]: E0226 12:36:57.979576 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:36:58 crc kubenswrapper[4724]: I0226 12:36:58.363209 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-52j8x" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="registry-server" probeResult="failure" output=< Feb 26 12:36:58 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:36:58 crc kubenswrapper[4724]: > Feb 26 12:37:02 crc kubenswrapper[4724]: I0226 12:37:02.430085 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" probeResult="failure" output=< Feb 26 12:37:02 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:37:02 crc kubenswrapper[4724]: > Feb 26 12:37:08 crc kubenswrapper[4724]: I0226 12:37:08.415727 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-52j8x" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="registry-server" probeResult="failure" output=< Feb 26 12:37:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:37:08 crc kubenswrapper[4724]: > Feb 26 12:37:10 crc kubenswrapper[4724]: I0226 12:37:10.050849 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:37:10 crc kubenswrapper[4724]: E0226 12:37:10.053730 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:37:12 crc kubenswrapper[4724]: I0226 12:37:12.091501 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" probeResult="failure" output=< Feb 26 12:37:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:37:12 crc kubenswrapper[4724]: > Feb 26 12:37:17 crc kubenswrapper[4724]: I0226 12:37:17.364025 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:37:17 crc kubenswrapper[4724]: I0226 12:37:17.416042 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:37:17 crc kubenswrapper[4724]: I0226 12:37:17.828898 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52j8x"] Feb 26 12:37:18 crc kubenswrapper[4724]: I0226 12:37:18.978933 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52j8x" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="registry-server" containerID="cri-o://826e8616a276ab4819fc07e17aa8a515cb8b5db92755fbf5ab473c51cbd4a110" gracePeriod=2 Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.052306 4724 generic.go:334] "Generic (PLEG): container finished" podID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerID="826e8616a276ab4819fc07e17aa8a515cb8b5db92755fbf5ab473c51cbd4a110" exitCode=0 Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.052575 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerDied","Data":"826e8616a276ab4819fc07e17aa8a515cb8b5db92755fbf5ab473c51cbd4a110"} Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.233474 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.396900 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-utilities\") pod \"82af98e0-3bc2-4384-adcb-fef0c343da75\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.397126 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrxvt\" (UniqueName: \"kubernetes.io/projected/82af98e0-3bc2-4384-adcb-fef0c343da75-kube-api-access-mrxvt\") pod \"82af98e0-3bc2-4384-adcb-fef0c343da75\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.397270 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-catalog-content\") pod \"82af98e0-3bc2-4384-adcb-fef0c343da75\" (UID: \"82af98e0-3bc2-4384-adcb-fef0c343da75\") " Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.402579 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-utilities" (OuterVolumeSpecName: "utilities") pod "82af98e0-3bc2-4384-adcb-fef0c343da75" (UID: "82af98e0-3bc2-4384-adcb-fef0c343da75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.490047 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82af98e0-3bc2-4384-adcb-fef0c343da75-kube-api-access-mrxvt" (OuterVolumeSpecName: "kube-api-access-mrxvt") pod "82af98e0-3bc2-4384-adcb-fef0c343da75" (UID: "82af98e0-3bc2-4384-adcb-fef0c343da75"). InnerVolumeSpecName "kube-api-access-mrxvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.500284 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.500372 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrxvt\" (UniqueName: \"kubernetes.io/projected/82af98e0-3bc2-4384-adcb-fef0c343da75-kube-api-access-mrxvt\") on node \"crc\" DevicePath \"\"" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.615121 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82af98e0-3bc2-4384-adcb-fef0c343da75" (UID: "82af98e0-3bc2-4384-adcb-fef0c343da75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.704567 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af98e0-3bc2-4384-adcb-fef0c343da75-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:37:20 crc kubenswrapper[4724]: I0226 12:37:20.975604 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:37:20 crc kubenswrapper[4724]: E0226 12:37:20.975955 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.066056 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j8x" event={"ID":"82af98e0-3bc2-4384-adcb-fef0c343da75","Type":"ContainerDied","Data":"997a64f1895cafcb1316530358ef929341fcad60091fa243bee9ad0e54d1fb7e"} Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.066118 4724 scope.go:117] "RemoveContainer" containerID="826e8616a276ab4819fc07e17aa8a515cb8b5db92755fbf5ab473c51cbd4a110" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.066144 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52j8x" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.100115 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.112223 4724 scope.go:117] "RemoveContainer" containerID="ee9f7efc5513395f0c54fa809a6bb9a60c054e7a36e6022950bf2e71365f0cff" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.139320 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52j8x"] Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.148016 4724 scope.go:117] "RemoveContainer" containerID="1dbf4575472db34184db9711cb9e9281f0e3dea8f0d546ce34c2e52951c125b1" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.157241 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52j8x"] Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.167545 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:37:21 crc kubenswrapper[4724]: I0226 12:37:21.988605 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" path="/var/lib/kubelet/pods/82af98e0-3bc2-4384-adcb-fef0c343da75/volumes" Feb 26 12:37:22 crc kubenswrapper[4724]: I0226 12:37:22.623880 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpjqj"] Feb 26 12:37:23 crc kubenswrapper[4724]: I0226 12:37:23.086059 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qpjqj" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" containerID="cri-o://312b6186fd569c2957e5242506c37ee951df0047886884c74a251383349bbb89" 
gracePeriod=2 Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.109561 4724 generic.go:334] "Generic (PLEG): container finished" podID="80851036-487b-4aff-970b-6c60b77089dd" containerID="312b6186fd569c2957e5242506c37ee951df0047886884c74a251383349bbb89" exitCode=0 Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.109878 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerDied","Data":"312b6186fd569c2957e5242506c37ee951df0047886884c74a251383349bbb89"} Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.239435 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.388921 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-catalog-content\") pod \"80851036-487b-4aff-970b-6c60b77089dd\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.389360 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-utilities\") pod \"80851036-487b-4aff-970b-6c60b77089dd\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.389491 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltxtc\" (UniqueName: \"kubernetes.io/projected/80851036-487b-4aff-970b-6c60b77089dd-kube-api-access-ltxtc\") pod \"80851036-487b-4aff-970b-6c60b77089dd\" (UID: \"80851036-487b-4aff-970b-6c60b77089dd\") " Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.390290 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-utilities" (OuterVolumeSpecName: "utilities") pod "80851036-487b-4aff-970b-6c60b77089dd" (UID: "80851036-487b-4aff-970b-6c60b77089dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.396609 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80851036-487b-4aff-970b-6c60b77089dd-kube-api-access-ltxtc" (OuterVolumeSpecName: "kube-api-access-ltxtc") pod "80851036-487b-4aff-970b-6c60b77089dd" (UID: "80851036-487b-4aff-970b-6c60b77089dd"). InnerVolumeSpecName "kube-api-access-ltxtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.492042 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.492086 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltxtc\" (UniqueName: \"kubernetes.io/projected/80851036-487b-4aff-970b-6c60b77089dd-kube-api-access-ltxtc\") on node \"crc\" DevicePath \"\"" Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.585661 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80851036-487b-4aff-970b-6c60b77089dd" (UID: "80851036-487b-4aff-970b-6c60b77089dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:37:24 crc kubenswrapper[4724]: I0226 12:37:24.593909 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80851036-487b-4aff-970b-6c60b77089dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.129404 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpjqj" event={"ID":"80851036-487b-4aff-970b-6c60b77089dd","Type":"ContainerDied","Data":"c7278f9418b175c244ffa43e6a6cd0c5cf0e28ba63d28ae15bcc4b5ff0eb0ac2"} Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.129464 4724 scope.go:117] "RemoveContainer" containerID="312b6186fd569c2957e5242506c37ee951df0047886884c74a251383349bbb89" Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.129480 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qpjqj" Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.176558 4724 scope.go:117] "RemoveContainer" containerID="3df3ac798e22a2782fa561940c49a87024f78a0719e63160849ef3a9283510e8" Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.183762 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpjqj"] Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.205910 4724 scope.go:117] "RemoveContainer" containerID="4dcf02c680c09a30b52f01337fad5b6f8ff4aff990973711955632bbc191cf08" Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.209691 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpjqj"] Feb 26 12:37:25 crc kubenswrapper[4724]: I0226 12:37:25.987451 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80851036-487b-4aff-970b-6c60b77089dd" path="/var/lib/kubelet/pods/80851036-487b-4aff-970b-6c60b77089dd/volumes" Feb 26 12:37:35 crc kubenswrapper[4724]: I0226 12:37:35.977718 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:37:35 crc kubenswrapper[4724]: E0226 12:37:35.978659 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:37:50 crc kubenswrapper[4724]: I0226 12:37:50.977395 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:37:50 crc kubenswrapper[4724]: E0226 12:37:50.978432 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.221805 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535158-8kzmc"] Feb 26 12:38:00 crc kubenswrapper[4724]: E0226 12:38:00.227342 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.227790 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" Feb 26 12:38:00 crc kubenswrapper[4724]: E0226 12:38:00.227852 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="extract-utilities" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.227864 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="extract-utilities" Feb 26 12:38:00 crc kubenswrapper[4724]: E0226 12:38:00.227877 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="extract-utilities" Feb 26 
12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.227886 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="extract-utilities" Feb 26 12:38:00 crc kubenswrapper[4724]: E0226 12:38:00.227917 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="extract-content" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.227924 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="extract-content" Feb 26 12:38:00 crc kubenswrapper[4724]: E0226 12:38:00.227933 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="registry-server" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.227939 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="registry-server" Feb 26 12:38:00 crc kubenswrapper[4724]: E0226 12:38:00.227959 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="extract-content" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.227966 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="extract-content" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.228772 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="82af98e0-3bc2-4384-adcb-fef0c343da75" containerName="registry-server" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.228803 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="80851036-487b-4aff-970b-6c60b77089dd" containerName="registry-server" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.231546 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.254530 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.254840 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.255288 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.260611 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535158-8kzmc"] Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.348674 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw27q\" (UniqueName: \"kubernetes.io/projected/f503fc72-9ae7-4210-bdd3-a5768cfa590d-kube-api-access-xw27q\") pod \"auto-csr-approver-29535158-8kzmc\" (UID: \"f503fc72-9ae7-4210-bdd3-a5768cfa590d\") " pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.450772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw27q\" (UniqueName: \"kubernetes.io/projected/f503fc72-9ae7-4210-bdd3-a5768cfa590d-kube-api-access-xw27q\") pod \"auto-csr-approver-29535158-8kzmc\" (UID: \"f503fc72-9ae7-4210-bdd3-a5768cfa590d\") " pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.480356 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw27q\" (UniqueName: \"kubernetes.io/projected/f503fc72-9ae7-4210-bdd3-a5768cfa590d-kube-api-access-xw27q\") pod \"auto-csr-approver-29535158-8kzmc\" (UID: \"f503fc72-9ae7-4210-bdd3-a5768cfa590d\") " pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:00 crc kubenswrapper[4724]: I0226 12:38:00.572875 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:01 crc kubenswrapper[4724]: I0226 12:38:01.582709 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535158-8kzmc"] Feb 26 12:38:01 crc kubenswrapper[4724]: I0226 12:38:01.976057 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:38:01 crc kubenswrapper[4724]: E0226 12:38:01.976413 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:38:02 crc kubenswrapper[4724]: I0226 12:38:02.548088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" event={"ID":"f503fc72-9ae7-4210-bdd3-a5768cfa590d","Type":"ContainerStarted","Data":"5793d729e72533139d6d60c01e99bd8941b5af29663fc90aa1ae11750c41a5c3"} Feb 26 12:38:04 crc kubenswrapper[4724]: I0226 12:38:04.571059 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" event={"ID":"f503fc72-9ae7-4210-bdd3-a5768cfa590d","Type":"ContainerStarted","Data":"589315bf64b0f91393fd7ab5922a1d4f386321f57d0c14c687c65de45ae07316"} Feb 26 12:38:04 crc kubenswrapper[4724]: I0226 12:38:04.620739 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" podStartSLOduration=3.553809356 podStartE2EDuration="4.589155188s" podCreationTimestamp="2026-02-26 12:38:00 +0000 UTC" firstStartedPulling="2026-02-26 12:38:01.597490312 +0000 UTC m=+5548.253229437" lastFinishedPulling="2026-02-26 12:38:02.632836154 +0000 UTC m=+5549.288575269" observedRunningTime="2026-02-26 12:38:04.585871764 +0000 UTC m=+5551.241610889" watchObservedRunningTime="2026-02-26 12:38:04.589155188 +0000 UTC m=+5551.244894303" Feb 26 12:38:06 crc kubenswrapper[4724]: I0226 12:38:06.591890 4724 generic.go:334] "Generic (PLEG): container finished" podID="f503fc72-9ae7-4210-bdd3-a5768cfa590d" containerID="589315bf64b0f91393fd7ab5922a1d4f386321f57d0c14c687c65de45ae07316" exitCode=0 Feb 26 12:38:06 crc kubenswrapper[4724]: I0226 12:38:06.592234 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" event={"ID":"f503fc72-9ae7-4210-bdd3-a5768cfa590d","Type":"ContainerDied","Data":"589315bf64b0f91393fd7ab5922a1d4f386321f57d0c14c687c65de45ae07316"} Feb 26 12:38:07 crc kubenswrapper[4724]: I0226 12:38:07.995279 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.127684 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw27q\" (UniqueName: \"kubernetes.io/projected/f503fc72-9ae7-4210-bdd3-a5768cfa590d-kube-api-access-xw27q\") pod \"f503fc72-9ae7-4210-bdd3-a5768cfa590d\" (UID: \"f503fc72-9ae7-4210-bdd3-a5768cfa590d\") " Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.151520 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f503fc72-9ae7-4210-bdd3-a5768cfa590d-kube-api-access-xw27q" (OuterVolumeSpecName: "kube-api-access-xw27q") pod "f503fc72-9ae7-4210-bdd3-a5768cfa590d" (UID: "f503fc72-9ae7-4210-bdd3-a5768cfa590d"). InnerVolumeSpecName "kube-api-access-xw27q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.230419 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw27q\" (UniqueName: \"kubernetes.io/projected/f503fc72-9ae7-4210-bdd3-a5768cfa590d-kube-api-access-xw27q\") on node \"crc\" DevicePath \"\"" Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.610419 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" event={"ID":"f503fc72-9ae7-4210-bdd3-a5768cfa590d","Type":"ContainerDied","Data":"5793d729e72533139d6d60c01e99bd8941b5af29663fc90aa1ae11750c41a5c3"} Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.610756 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5793d729e72533139d6d60c01e99bd8941b5af29663fc90aa1ae11750c41a5c3" Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.610477 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535158-8kzmc" Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.710296 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535152-tfl7m"] Feb 26 12:38:08 crc kubenswrapper[4724]: I0226 12:38:08.718744 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535152-tfl7m"] Feb 26 12:38:09 crc kubenswrapper[4724]: I0226 12:38:09.987841 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0536996-e306-4587-b1da-d6d0afaace7d" path="/var/lib/kubelet/pods/b0536996-e306-4587-b1da-d6d0afaace7d/volumes" Feb 26 12:38:14 crc kubenswrapper[4724]: I0226 12:38:14.976289 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:38:14 crc kubenswrapper[4724]: E0226 12:38:14.977206 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:38:22 crc kubenswrapper[4724]: I0226 12:38:22.684320 4724 scope.go:117] "RemoveContainer" containerID="69cad349b52ac61f7efd7147f79db564cdff87adb7563b6849c173235c2a563b" Feb 26 12:38:29 crc kubenswrapper[4724]: I0226 12:38:29.975926 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:38:29 crc kubenswrapper[4724]: E0226 12:38:29.976841 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:38:40 crc kubenswrapper[4724]: I0226 12:38:40.976069 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:38:40 crc kubenswrapper[4724]: E0226 12:38:40.976814 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:38:51 crc kubenswrapper[4724]: I0226 12:38:51.976665 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:38:51 crc kubenswrapper[4724]: E0226 12:38:51.977736 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 
12:39:02 crc kubenswrapper[4724]: I0226 12:39:02.976033 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:39:02 crc kubenswrapper[4724]: E0226 12:39:02.976919 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:39:14 crc kubenswrapper[4724]: I0226 12:39:14.975695 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:39:14 crc kubenswrapper[4724]: E0226 12:39:14.977622 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:39:25 crc kubenswrapper[4724]: I0226 12:39:25.976199 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:39:25 crc kubenswrapper[4724]: E0226 12:39:25.977316 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:39:40 crc kubenswrapper[4724]: I0226 12:39:40.975971 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:39:40 crc kubenswrapper[4724]: E0226 12:39:40.976673 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:39:52 crc kubenswrapper[4724]: I0226 12:39:52.977106 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:39:52 crc kubenswrapper[4724]: E0226 12:39:52.978023 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.180904 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535160-ngp7b"] Feb 26 12:40:00 crc 
kubenswrapper[4724]: E0226 12:40:00.181909 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f503fc72-9ae7-4210-bdd3-a5768cfa590d" containerName="oc" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.181923 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f503fc72-9ae7-4210-bdd3-a5768cfa590d" containerName="oc" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.182140 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f503fc72-9ae7-4210-bdd3-a5768cfa590d" containerName="oc" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.182891 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.190109 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.190126 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.193321 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.198735 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535160-ngp7b"] Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.334362 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5vj\" (UniqueName: \"kubernetes.io/projected/6f3cd17f-2082-4864-865f-699e05d4fc84-kube-api-access-kz5vj\") pod \"auto-csr-approver-29535160-ngp7b\" (UID: \"6f3cd17f-2082-4864-865f-699e05d4fc84\") " pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.436056 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5vj\" (UniqueName: \"kubernetes.io/projected/6f3cd17f-2082-4864-865f-699e05d4fc84-kube-api-access-kz5vj\") pod \"auto-csr-approver-29535160-ngp7b\" (UID: \"6f3cd17f-2082-4864-865f-699e05d4fc84\") " pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.479054 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5vj\" (UniqueName: \"kubernetes.io/projected/6f3cd17f-2082-4864-865f-699e05d4fc84-kube-api-access-kz5vj\") pod \"auto-csr-approver-29535160-ngp7b\" (UID: \"6f3cd17f-2082-4864-865f-699e05d4fc84\") " pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:00 crc kubenswrapper[4724]: I0226 12:40:00.506226 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:01 crc kubenswrapper[4724]: I0226 12:40:01.174707 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535160-ngp7b"] Feb 26 12:40:01 crc kubenswrapper[4724]: W0226 12:40:01.180720 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f3cd17f_2082_4864_865f_699e05d4fc84.slice/crio-46636fd4a85580e25f9311687cffd71579304cee6c29654fb378feba73ab7b0f WatchSource:0}: Error finding container 46636fd4a85580e25f9311687cffd71579304cee6c29654fb378feba73ab7b0f: Status 404 returned error can't find the container with id 46636fd4a85580e25f9311687cffd71579304cee6c29654fb378feba73ab7b0f Feb 26 12:40:01 crc kubenswrapper[4724]: I0226 12:40:01.183374 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:40:02 crc kubenswrapper[4724]: I0226 12:40:02.109979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" event={"ID":"6f3cd17f-2082-4864-865f-699e05d4fc84","Type":"ContainerStarted","Data":"46636fd4a85580e25f9311687cffd71579304cee6c29654fb378feba73ab7b0f"} Feb 26 12:40:03 crc kubenswrapper[4724]: I0226 12:40:03.123532 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" event={"ID":"6f3cd17f-2082-4864-865f-699e05d4fc84","Type":"ContainerStarted","Data":"c03d006443ebcd4b41826f98155c754c5cd7f1f0ce325b13120e9957b3ee428e"} Feb 26 12:40:03 crc kubenswrapper[4724]: I0226 12:40:03.155978 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" podStartSLOduration=2.11674319 podStartE2EDuration="3.155927409s" podCreationTimestamp="2026-02-26 12:40:00 +0000 UTC" firstStartedPulling="2026-02-26 12:40:01.181967666 +0000 UTC m=+5667.837706781" lastFinishedPulling="2026-02-26 12:40:02.221151845 +0000 UTC m=+5668.876891000" observedRunningTime="2026-02-26 12:40:03.142709512 +0000 UTC m=+5669.798448657" watchObservedRunningTime="2026-02-26 12:40:03.155927409 +0000 UTC m=+5669.811666534" Feb 26 12:40:04 crc kubenswrapper[4724]: I0226 12:40:04.136844 4724 generic.go:334] "Generic (PLEG): container finished" podID="6f3cd17f-2082-4864-865f-699e05d4fc84" containerID="c03d006443ebcd4b41826f98155c754c5cd7f1f0ce325b13120e9957b3ee428e" exitCode=0 Feb 26 12:40:04 crc kubenswrapper[4724]: I0226 12:40:04.136885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" event={"ID":"6f3cd17f-2082-4864-865f-699e05d4fc84","Type":"ContainerDied","Data":"c03d006443ebcd4b41826f98155c754c5cd7f1f0ce325b13120e9957b3ee428e"} Feb 26 12:40:04 crc kubenswrapper[4724]: I0226 12:40:04.976012 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:40:04 crc kubenswrapper[4724]: E0226 12:40:04.976604 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:40:05 crc kubenswrapper[4724]: I0226 
12:40:05.538190 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:05 crc kubenswrapper[4724]: I0226 12:40:05.645507 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz5vj\" (UniqueName: \"kubernetes.io/projected/6f3cd17f-2082-4864-865f-699e05d4fc84-kube-api-access-kz5vj\") pod \"6f3cd17f-2082-4864-865f-699e05d4fc84\" (UID: \"6f3cd17f-2082-4864-865f-699e05d4fc84\") " Feb 26 12:40:05 crc kubenswrapper[4724]: I0226 12:40:05.655807 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3cd17f-2082-4864-865f-699e05d4fc84-kube-api-access-kz5vj" (OuterVolumeSpecName: "kube-api-access-kz5vj") pod "6f3cd17f-2082-4864-865f-699e05d4fc84" (UID: "6f3cd17f-2082-4864-865f-699e05d4fc84"). InnerVolumeSpecName "kube-api-access-kz5vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:40:05 crc kubenswrapper[4724]: I0226 12:40:05.748001 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz5vj\" (UniqueName: \"kubernetes.io/projected/6f3cd17f-2082-4864-865f-699e05d4fc84-kube-api-access-kz5vj\") on node \"crc\" DevicePath \"\"" Feb 26 12:40:06 crc kubenswrapper[4724]: I0226 12:40:06.156845 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" event={"ID":"6f3cd17f-2082-4864-865f-699e05d4fc84","Type":"ContainerDied","Data":"46636fd4a85580e25f9311687cffd71579304cee6c29654fb378feba73ab7b0f"} Feb 26 12:40:06 crc kubenswrapper[4724]: I0226 12:40:06.156896 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46636fd4a85580e25f9311687cffd71579304cee6c29654fb378feba73ab7b0f" Feb 26 12:40:06 crc kubenswrapper[4724]: I0226 12:40:06.156968 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535160-ngp7b" Feb 26 12:40:06 crc kubenswrapper[4724]: I0226 12:40:06.238760 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535154-7m2qc"] Feb 26 12:40:06 crc kubenswrapper[4724]: I0226 12:40:06.277022 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535154-7m2qc"] Feb 26 12:40:07 crc kubenswrapper[4724]: I0226 12:40:07.994827 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9" path="/var/lib/kubelet/pods/f26e2bd2-84ce-45d9-9f72-6b3a1b7286a9/volumes" Feb 26 12:40:17 crc kubenswrapper[4724]: I0226 12:40:17.976554 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978" Feb 26 12:40:19 crc kubenswrapper[4724]: I0226 12:40:19.291828 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"c13f8aaffd91bf6bdd288b56b623f9d351168988e4fe880b8bdef6c9cf96524e"} Feb 26 12:40:22 crc kubenswrapper[4724]: I0226 12:40:22.842417 4724 scope.go:117] "RemoveContainer" containerID="411c904a717766698dbde0e0dadb4facc2674f9a475ff1894f6ef9c222f88a4c" Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.064049 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kj7rh"] Feb 26 12:41:01 crc kubenswrapper[4724]: E0226 12:41:01.065007 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3cd17f-2082-4864-865f-699e05d4fc84" containerName="oc" Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.065021 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3cd17f-2082-4864-865f-699e05d4fc84" containerName="oc" Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.065827 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3cd17f-2082-4864-865f-699e05d4fc84" containerName="oc" Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.068509 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.086016 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj7rh"]
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.142409 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-utilities\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.142774 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-catalog-content\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.142854 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrmh\" (UniqueName: \"kubernetes.io/projected/18808630-294e-449c-81d4-c57c7ff88c1f-kube-api-access-xqrmh\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.244919 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-catalog-content\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.244959 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqrmh\" (UniqueName: \"kubernetes.io/projected/18808630-294e-449c-81d4-c57c7ff88c1f-kube-api-access-xqrmh\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.245088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-utilities\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.245427 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-catalog-content\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.245484 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-utilities\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.267690 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqrmh\" (UniqueName: \"kubernetes.io/projected/18808630-294e-449c-81d4-c57c7ff88c1f-kube-api-access-xqrmh\") pod \"certified-operators-kj7rh\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") " pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:01 crc kubenswrapper[4724]: I0226 12:41:01.398016 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:02 crc kubenswrapper[4724]: I0226 12:41:02.374019 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj7rh"]
Feb 26 12:41:03 crc kubenswrapper[4724]: I0226 12:41:03.031276 4724 generic.go:334] "Generic (PLEG): container finished" podID="18808630-294e-449c-81d4-c57c7ff88c1f" containerID="e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3" exitCode=0
Feb 26 12:41:03 crc kubenswrapper[4724]: I0226 12:41:03.031349 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerDied","Data":"e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3"}
Feb 26 12:41:03 crc kubenswrapper[4724]: I0226 12:41:03.031741 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerStarted","Data":"3847580d17d02ca2aec8be0ff797828a3379b9e9be6842460d765743b7da0748"}
Feb 26 12:41:05 crc kubenswrapper[4724]: I0226 12:41:05.052462 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerStarted","Data":"e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588"}
Feb 26 12:41:08 crc kubenswrapper[4724]: I0226 12:41:08.080711 4724 generic.go:334] "Generic (PLEG): container finished" podID="18808630-294e-449c-81d4-c57c7ff88c1f" containerID="e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588" exitCode=0
Feb 26 12:41:08 crc kubenswrapper[4724]: I0226 12:41:08.081327 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerDied","Data":"e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588"}
Feb 26 12:41:09 crc kubenswrapper[4724]: I0226 12:41:09.091566 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerStarted","Data":"404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250"}
Feb 26 12:41:09 crc kubenswrapper[4724]: I0226 12:41:09.129312 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kj7rh" podStartSLOduration=2.45502759 podStartE2EDuration="8.129293482s" podCreationTimestamp="2026-02-26 12:41:01 +0000 UTC" firstStartedPulling="2026-02-26 12:41:03.033761525 +0000 UTC m=+5729.689500640" lastFinishedPulling="2026-02-26 12:41:08.708027417 +0000 UTC m=+5735.363766532" observedRunningTime="2026-02-26 12:41:09.115845608 +0000 UTC m=+5735.771584723" watchObservedRunningTime="2026-02-26 12:41:09.129293482 +0000 UTC m=+5735.785032597"
Feb 26 12:41:11 crc kubenswrapper[4724]: I0226 12:41:11.402811 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:11 crc kubenswrapper[4724]: I0226 12:41:11.403230 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:12 crc kubenswrapper[4724]: I0226 12:41:12.488297 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kj7rh" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:41:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:41:12 crc kubenswrapper[4724]: >
Feb 26 12:41:22 crc kubenswrapper[4724]: I0226 12:41:22.450896 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kj7rh" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:41:22 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:41:22 crc kubenswrapper[4724]: >
Feb 26 12:41:32 crc kubenswrapper[4724]: I0226 12:41:32.454454 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kj7rh" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:41:32 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:41:32 crc kubenswrapper[4724]: >
Feb 26 12:41:41 crc kubenswrapper[4724]: I0226 12:41:41.463491 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:41 crc kubenswrapper[4724]: I0226 12:41:41.512929 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:41 crc kubenswrapper[4724]: I0226 12:41:41.704157 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj7rh"]
Feb 26 12:41:43 crc kubenswrapper[4724]: I0226 12:41:43.431542 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kj7rh" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server" containerID="cri-o://404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250" gracePeriod=2
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.204171 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.314758 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqrmh\" (UniqueName: \"kubernetes.io/projected/18808630-294e-449c-81d4-c57c7ff88c1f-kube-api-access-xqrmh\") pod \"18808630-294e-449c-81d4-c57c7ff88c1f\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") "
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.315010 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-utilities\") pod \"18808630-294e-449c-81d4-c57c7ff88c1f\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") "
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.315094 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-catalog-content\") pod \"18808630-294e-449c-81d4-c57c7ff88c1f\" (UID: \"18808630-294e-449c-81d4-c57c7ff88c1f\") "
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.320045 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-utilities" (OuterVolumeSpecName: "utilities") pod "18808630-294e-449c-81d4-c57c7ff88c1f" (UID: "18808630-294e-449c-81d4-c57c7ff88c1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.325446 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18808630-294e-449c-81d4-c57c7ff88c1f-kube-api-access-xqrmh" (OuterVolumeSpecName: "kube-api-access-xqrmh") pod "18808630-294e-449c-81d4-c57c7ff88c1f" (UID: "18808630-294e-449c-81d4-c57c7ff88c1f"). InnerVolumeSpecName "kube-api-access-xqrmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.364064 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18808630-294e-449c-81d4-c57c7ff88c1f" (UID: "18808630-294e-449c-81d4-c57c7ff88c1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.417804 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqrmh\" (UniqueName: \"kubernetes.io/projected/18808630-294e-449c-81d4-c57c7ff88c1f-kube-api-access-xqrmh\") on node \"crc\" DevicePath \"\""
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.417869 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.417879 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18808630-294e-449c-81d4-c57c7ff88c1f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.440561 4724 generic.go:334] "Generic (PLEG): container finished" podID="18808630-294e-449c-81d4-c57c7ff88c1f" containerID="404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250" exitCode=0
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.440632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerDied","Data":"404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250"}
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.440670 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7rh" event={"ID":"18808630-294e-449c-81d4-c57c7ff88c1f","Type":"ContainerDied","Data":"3847580d17d02ca2aec8be0ff797828a3379b9e9be6842460d765743b7da0748"}
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.440693 4724 scope.go:117] "RemoveContainer" containerID="404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.442237 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj7rh"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.461875 4724 scope.go:117] "RemoveContainer" containerID="e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.492462 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj7rh"]
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.498276 4724 scope.go:117] "RemoveContainer" containerID="e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.505205 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kj7rh"]
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.544488 4724 scope.go:117] "RemoveContainer" containerID="404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250"
Feb 26 12:41:44 crc kubenswrapper[4724]: E0226 12:41:44.546438 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250\": container with ID starting with 404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250 not found: ID does not exist" containerID="404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.546484 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250"} err="failed to get container status \"404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250\": rpc error: code = NotFound desc = could not find container \"404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250\": container with ID starting with 404da26201b4e88e43312873f5698cb670aad43125c57b55985f0c3ac9f10250 not found: ID does not exist"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.546514 4724 scope.go:117] "RemoveContainer" containerID="e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588"
Feb 26 12:41:44 crc kubenswrapper[4724]: E0226 12:41:44.547081 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588\": container with ID starting with e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588 not found: ID does not exist" containerID="e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.547122 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588"} err="failed to get container status \"e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588\": rpc error: code = NotFound desc = could not find container \"e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588\": container with ID starting with e2feadf655cd330707fd616cb61057ee7163430026e56114350b57c214215588 not found: ID does not exist"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.547153 4724 scope.go:117] "RemoveContainer" containerID="e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3"
Feb 26 12:41:44 crc kubenswrapper[4724]: E0226 12:41:44.547593 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3\": container with ID starting with e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3 not found: ID does not exist" containerID="e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3"
Feb 26 12:41:44 crc kubenswrapper[4724]: I0226 12:41:44.547747 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3"} err="failed to get container status \"e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3\": rpc error: code = NotFound desc = could not find container \"e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3\": container with ID starting with e11d82397842a3ed7391ad8a35e339a848f7b6df21f23c2324a8c5b1af967fc3 not found: ID does not exist"
Feb 26 12:41:45 crc kubenswrapper[4724]: I0226 12:41:45.986892 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" path="/var/lib/kubelet/pods/18808630-294e-449c-81d4-c57c7ff88c1f/volumes"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.157545 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535162-m9bs5"]
Feb 26 12:42:00 crc kubenswrapper[4724]: E0226 12:42:00.158669 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.158692 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server"
Feb 26 12:42:00 crc kubenswrapper[4724]: E0226 12:42:00.158719 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="extract-content"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.158728 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="extract-content"
Feb 26 12:42:00 crc kubenswrapper[4724]: E0226 12:42:00.158771 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="extract-utilities"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.158780 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="extract-utilities"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.159026 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="18808630-294e-449c-81d4-c57c7ff88c1f" containerName="registry-server"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.159914 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.165959 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.166293 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.168201 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535162-m9bs5"]
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.171144 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.182564 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46589\" (UniqueName: \"kubernetes.io/projected/06b194c0-b53e-4974-9646-4febbeb34a16-kube-api-access-46589\") pod \"auto-csr-approver-29535162-m9bs5\" (UID: \"06b194c0-b53e-4974-9646-4febbeb34a16\") " pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.284539 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46589\" (UniqueName: \"kubernetes.io/projected/06b194c0-b53e-4974-9646-4febbeb34a16-kube-api-access-46589\") pod \"auto-csr-approver-29535162-m9bs5\" (UID: \"06b194c0-b53e-4974-9646-4febbeb34a16\") " pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.323209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46589\" (UniqueName: \"kubernetes.io/projected/06b194c0-b53e-4974-9646-4febbeb34a16-kube-api-access-46589\") pod \"auto-csr-approver-29535162-m9bs5\" (UID: \"06b194c0-b53e-4974-9646-4febbeb34a16\") " pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:00 crc kubenswrapper[4724]: I0226 12:42:00.485949 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:01 crc kubenswrapper[4724]: I0226 12:42:01.096154 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535162-m9bs5"]
Feb 26 12:42:01 crc kubenswrapper[4724]: I0226 12:42:01.638501 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535162-m9bs5" event={"ID":"06b194c0-b53e-4974-9646-4febbeb34a16","Type":"ContainerStarted","Data":"6259aaff4bd8a723313cc1b6adeed6b328d3761d655b7a2c544997f982f8bb51"}
Feb 26 12:42:03 crc kubenswrapper[4724]: I0226 12:42:03.662142 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535162-m9bs5" event={"ID":"06b194c0-b53e-4974-9646-4febbeb34a16","Type":"ContainerStarted","Data":"f34973db78430d636aa38b31c8bbf391d6d0fddfb100723d7c197452afb5726b"}
Feb 26 12:42:03 crc kubenswrapper[4724]: I0226 12:42:03.695091 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535162-m9bs5" podStartSLOduration=2.258067825 podStartE2EDuration="3.695048471s" podCreationTimestamp="2026-02-26 12:42:00 +0000 UTC" firstStartedPulling="2026-02-26 12:42:01.107270776 +0000 UTC m=+5787.763009891" lastFinishedPulling="2026-02-26 12:42:02.544251422 +0000 UTC m=+5789.199990537" observedRunningTime="2026-02-26 12:42:03.692378903 +0000 UTC m=+5790.348118018" watchObservedRunningTime="2026-02-26 12:42:03.695048471 +0000 UTC m=+5790.350787586"
Feb 26 12:42:04 crc kubenswrapper[4724]: I0226 12:42:04.671882 4724 generic.go:334] "Generic (PLEG): container finished" podID="06b194c0-b53e-4974-9646-4febbeb34a16" containerID="f34973db78430d636aa38b31c8bbf391d6d0fddfb100723d7c197452afb5726b" exitCode=0
Feb 26 12:42:04 crc kubenswrapper[4724]: I0226 12:42:04.671935 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535162-m9bs5" event={"ID":"06b194c0-b53e-4974-9646-4febbeb34a16","Type":"ContainerDied","Data":"f34973db78430d636aa38b31c8bbf391d6d0fddfb100723d7c197452afb5726b"}
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.142804 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.304745 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46589\" (UniqueName: \"kubernetes.io/projected/06b194c0-b53e-4974-9646-4febbeb34a16-kube-api-access-46589\") pod \"06b194c0-b53e-4974-9646-4febbeb34a16\" (UID: \"06b194c0-b53e-4974-9646-4febbeb34a16\") "
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.314869 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b194c0-b53e-4974-9646-4febbeb34a16-kube-api-access-46589" (OuterVolumeSpecName: "kube-api-access-46589") pod "06b194c0-b53e-4974-9646-4febbeb34a16" (UID: "06b194c0-b53e-4974-9646-4febbeb34a16"). InnerVolumeSpecName "kube-api-access-46589". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.407543 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46589\" (UniqueName: \"kubernetes.io/projected/06b194c0-b53e-4974-9646-4febbeb34a16-kube-api-access-46589\") on node \"crc\" DevicePath \"\""
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.703845 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535162-m9bs5" event={"ID":"06b194c0-b53e-4974-9646-4febbeb34a16","Type":"ContainerDied","Data":"6259aaff4bd8a723313cc1b6adeed6b328d3761d655b7a2c544997f982f8bb51"}
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.703897 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6259aaff4bd8a723313cc1b6adeed6b328d3761d655b7a2c544997f982f8bb51"
Feb 26 12:42:06 crc kubenswrapper[4724]: I0226 12:42:06.703963 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535162-m9bs5"
Feb 26 12:42:07 crc kubenswrapper[4724]: I0226 12:42:07.223453 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535156-hrq2j"]
Feb 26 12:42:07 crc kubenswrapper[4724]: I0226 12:42:07.234035 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535156-hrq2j"]
Feb 26 12:42:07 crc kubenswrapper[4724]: I0226 12:42:07.987793 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bd9ad3-49b2-47a8-b90b-bf589333ac94" path="/var/lib/kubelet/pods/44bd9ad3-49b2-47a8-b90b-bf589333ac94/volumes"
Feb 26 12:42:22 crc kubenswrapper[4724]: I0226 12:42:22.951559 4724 scope.go:117] "RemoveContainer" containerID="93d5bc7a19489f6c2796097e2caf8abda8bd7d81dda5d19ff9e4c8efe472d606"
Feb 26 12:42:46 crc kubenswrapper[4724]: I0226 12:42:46.906231 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:42:46 crc kubenswrapper[4724]: I0226 12:42:46.908633 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:43:16 crc kubenswrapper[4724]: I0226 12:43:16.906594 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:43:16 crc kubenswrapper[4724]: I0226 12:43:16.907166 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:43:46 crc kubenswrapper[4724]: I0226 12:43:46.907715 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 12:43:46 crc kubenswrapper[4724]: I0226 12:43:46.908346 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 12:43:46 crc kubenswrapper[4724]: I0226 12:43:46.908412 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 12:43:46 crc kubenswrapper[4724]: I0226 12:43:46.910406 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c13f8aaffd91bf6bdd288b56b623f9d351168988e4fe880b8bdef6c9cf96524e"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 12:43:46 crc kubenswrapper[4724]: I0226 12:43:46.910465 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://c13f8aaffd91bf6bdd288b56b623f9d351168988e4fe880b8bdef6c9cf96524e" gracePeriod=600
Feb 26 12:43:47 crc kubenswrapper[4724]: I0226 12:43:47.628947 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="c13f8aaffd91bf6bdd288b56b623f9d351168988e4fe880b8bdef6c9cf96524e" exitCode=0
Feb 26 12:43:47 crc kubenswrapper[4724]: I0226 12:43:47.629019 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"c13f8aaffd91bf6bdd288b56b623f9d351168988e4fe880b8bdef6c9cf96524e"}
Feb 26 12:43:47 crc kubenswrapper[4724]: I0226 12:43:47.629398 4724 scope.go:117] "RemoveContainer" containerID="4ba560fe5b8ad1663ab7ac909e5d6e6a3de8f461ec88343527a124539a336978"
Feb 26 12:43:48 crc kubenswrapper[4724]: I0226 12:43:48.641241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b"}
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.161349 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535164-7jhrh"]
Feb 26 12:44:00 crc kubenswrapper[4724]: E0226 12:44:00.162288 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b194c0-b53e-4974-9646-4febbeb34a16" containerName="oc"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.162305 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b194c0-b53e-4974-9646-4febbeb34a16" containerName="oc"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.162565 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b194c0-b53e-4974-9646-4febbeb34a16" containerName="oc"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.163431 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.167934 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.168161 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.169727 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.187707 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535164-7jhrh"]
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.313905 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9jg\" (UniqueName: \"kubernetes.io/projected/3cf21403-b280-4c82-88d2-53f27e1bda8c-kube-api-access-jw9jg\") pod \"auto-csr-approver-29535164-7jhrh\" (UID: \"3cf21403-b280-4c82-88d2-53f27e1bda8c\") " pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.416232 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9jg\" (UniqueName: \"kubernetes.io/projected/3cf21403-b280-4c82-88d2-53f27e1bda8c-kube-api-access-jw9jg\") pod \"auto-csr-approver-29535164-7jhrh\" (UID: \"3cf21403-b280-4c82-88d2-53f27e1bda8c\") " pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.436784 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9jg\" (UniqueName: \"kubernetes.io/projected/3cf21403-b280-4c82-88d2-53f27e1bda8c-kube-api-access-jw9jg\") pod \"auto-csr-approver-29535164-7jhrh\" (UID: \"3cf21403-b280-4c82-88d2-53f27e1bda8c\") " pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:00 crc kubenswrapper[4724]: I0226 12:44:00.500427 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:01 crc kubenswrapper[4724]: I0226 12:44:01.120649 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535164-7jhrh"]
Feb 26 12:44:01 crc kubenswrapper[4724]: I0226 12:44:01.751411 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535164-7jhrh" event={"ID":"3cf21403-b280-4c82-88d2-53f27e1bda8c","Type":"ContainerStarted","Data":"e07f9c268f87b6c8894c1c0e7707e971010bb5ef0ca7e46339eaf9e5ea1d0b8b"}
Feb 26 12:44:02 crc kubenswrapper[4724]: I0226 12:44:02.762050 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535164-7jhrh" event={"ID":"3cf21403-b280-4c82-88d2-53f27e1bda8c","Type":"ContainerStarted","Data":"74d08d306ff8f834909f20891556b71a08158b19edfee42d56a8808da3322cbe"}
Feb 26 12:44:02 crc kubenswrapper[4724]: I0226 12:44:02.779883 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535164-7jhrh" podStartSLOduration=1.611580953 podStartE2EDuration="2.779862399s" podCreationTimestamp="2026-02-26 12:44:00 +0000 UTC" firstStartedPulling="2026-02-26 12:44:01.147328691 +0000 UTC m=+5907.803067806" lastFinishedPulling="2026-02-26 12:44:02.315610137 +0000 UTC m=+5908.971349252" observedRunningTime="2026-02-26 12:44:02.777362585 +0000 UTC m=+5909.433101720" watchObservedRunningTime="2026-02-26 12:44:02.779862399 +0000 UTC m=+5909.435601514"
Feb 26 12:44:03 crc kubenswrapper[4724]: I0226 12:44:03.771172 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cf21403-b280-4c82-88d2-53f27e1bda8c" containerID="74d08d306ff8f834909f20891556b71a08158b19edfee42d56a8808da3322cbe" exitCode=0
Feb 26 12:44:03 crc kubenswrapper[4724]: I0226 12:44:03.771399 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535164-7jhrh" event={"ID":"3cf21403-b280-4c82-88d2-53f27e1bda8c","Type":"ContainerDied","Data":"74d08d306ff8f834909f20891556b71a08158b19edfee42d56a8808da3322cbe"}
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.133225 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.307898 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw9jg\" (UniqueName: \"kubernetes.io/projected/3cf21403-b280-4c82-88d2-53f27e1bda8c-kube-api-access-jw9jg\") pod \"3cf21403-b280-4c82-88d2-53f27e1bda8c\" (UID: \"3cf21403-b280-4c82-88d2-53f27e1bda8c\") "
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.320710 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf21403-b280-4c82-88d2-53f27e1bda8c-kube-api-access-jw9jg" (OuterVolumeSpecName: "kube-api-access-jw9jg") pod "3cf21403-b280-4c82-88d2-53f27e1bda8c" (UID: "3cf21403-b280-4c82-88d2-53f27e1bda8c"). InnerVolumeSpecName "kube-api-access-jw9jg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.410474 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw9jg\" (UniqueName: \"kubernetes.io/projected/3cf21403-b280-4c82-88d2-53f27e1bda8c-kube-api-access-jw9jg\") on node \"crc\" DevicePath \"\""
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.791662 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535164-7jhrh" event={"ID":"3cf21403-b280-4c82-88d2-53f27e1bda8c","Type":"ContainerDied","Data":"e07f9c268f87b6c8894c1c0e7707e971010bb5ef0ca7e46339eaf9e5ea1d0b8b"}
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.791745 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07f9c268f87b6c8894c1c0e7707e971010bb5ef0ca7e46339eaf9e5ea1d0b8b"
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.791821 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535164-7jhrh"
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.884784 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535158-8kzmc"]
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.893359 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535158-8kzmc"]
Feb 26 12:44:05 crc kubenswrapper[4724]: I0226 12:44:05.987703 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f503fc72-9ae7-4210-bdd3-a5768cfa590d" path="/var/lib/kubelet/pods/f503fc72-9ae7-4210-bdd3-a5768cfa590d/volumes"
Feb 26 12:44:23 crc kubenswrapper[4724]: I0226 12:44:23.091383 4724 scope.go:117] "RemoveContainer" containerID="589315bf64b0f91393fd7ab5922a1d4f386321f57d0c14c687c65de45ae07316"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.154855 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"]
Feb 26 12:45:00 crc kubenswrapper[4724]: E0226 12:45:00.155981 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf21403-b280-4c82-88d2-53f27e1bda8c" containerName="oc"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.155998 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf21403-b280-4c82-88d2-53f27e1bda8c" containerName="oc"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.156494 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf21403-b280-4c82-88d2-53f27e1bda8c" containerName="oc"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.157352 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.164223 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.167073 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.174687 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"]
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.266887 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fbff6a3-55eb-4222-92a4-960f632ccbaf-secret-volume\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.266991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/5fbff6a3-55eb-4222-92a4-960f632ccbaf-kube-api-access-k9ss9\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.267039 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fbff6a3-55eb-4222-92a4-960f632ccbaf-config-volume\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.369783 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fbff6a3-55eb-4222-92a4-960f632ccbaf-secret-volume\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.370404 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/5fbff6a3-55eb-4222-92a4-960f632ccbaf-kube-api-access-k9ss9\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.370443 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fbff6a3-55eb-4222-92a4-960f632ccbaf-config-volume\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.371890 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fbff6a3-55eb-4222-92a4-960f632ccbaf-config-volume\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.383364 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fbff6a3-55eb-4222-92a4-960f632ccbaf-secret-volume\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.392209 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/5fbff6a3-55eb-4222-92a4-960f632ccbaf-kube-api-access-k9ss9\") pod \"collect-profiles-29535165-vg597\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:00 crc kubenswrapper[4724]: I0226 12:45:00.504125 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:01 crc kubenswrapper[4724]: I0226 12:45:00.997807 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"]
Feb 26 12:45:01 crc kubenswrapper[4724]: I0226 12:45:01.284056 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597" event={"ID":"5fbff6a3-55eb-4222-92a4-960f632ccbaf","Type":"ContainerStarted","Data":"fc8f38930fdb54e9a403db9b885d3f9851d594e9a0ccf1b0a7c5b9f3e113b62c"}
Feb 26 12:45:01 crc kubenswrapper[4724]: I0226 12:45:01.284317 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597" event={"ID":"5fbff6a3-55eb-4222-92a4-960f632ccbaf","Type":"ContainerStarted","Data":"a527f8196c49e323189e4749736b5eba8aa66846adc26525fd8d4d0d9567448b"}
Feb 26 12:45:01 crc kubenswrapper[4724]: I0226 12:45:01.312386 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597" podStartSLOduration=1.312345261 podStartE2EDuration="1.312345261s" podCreationTimestamp="2026-02-26 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 12:45:01.303813693 +0000 UTC m=+5967.959552828" watchObservedRunningTime="2026-02-26 12:45:01.312345261 +0000 UTC m=+5967.968084376"
Feb 26 12:45:02 crc kubenswrapper[4724]: I0226 12:45:02.294817 4724 generic.go:334] "Generic (PLEG): container finished" podID="5fbff6a3-55eb-4222-92a4-960f632ccbaf" containerID="fc8f38930fdb54e9a403db9b885d3f9851d594e9a0ccf1b0a7c5b9f3e113b62c" exitCode=0
Feb 26 12:45:02 crc kubenswrapper[4724]: I0226 12:45:02.294886 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597" event={"ID":"5fbff6a3-55eb-4222-92a4-960f632ccbaf","Type":"ContainerDied","Data":"fc8f38930fdb54e9a403db9b885d3f9851d594e9a0ccf1b0a7c5b9f3e113b62c"}
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.711242 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.845946 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fbff6a3-55eb-4222-92a4-960f632ccbaf-secret-volume\") pod \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") "
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.846037 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/5fbff6a3-55eb-4222-92a4-960f632ccbaf-kube-api-access-k9ss9\") pod \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") "
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.846206 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fbff6a3-55eb-4222-92a4-960f632ccbaf-config-volume\") pod \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\" (UID: \"5fbff6a3-55eb-4222-92a4-960f632ccbaf\") "
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.847390 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbff6a3-55eb-4222-92a4-960f632ccbaf-config-volume" (OuterVolumeSpecName: "config-volume") pod "5fbff6a3-55eb-4222-92a4-960f632ccbaf" (UID: "5fbff6a3-55eb-4222-92a4-960f632ccbaf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.866191 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbff6a3-55eb-4222-92a4-960f632ccbaf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5fbff6a3-55eb-4222-92a4-960f632ccbaf" (UID: "5fbff6a3-55eb-4222-92a4-960f632ccbaf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.866835 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbff6a3-55eb-4222-92a4-960f632ccbaf-kube-api-access-k9ss9" (OuterVolumeSpecName: "kube-api-access-k9ss9") pod "5fbff6a3-55eb-4222-92a4-960f632ccbaf" (UID: "5fbff6a3-55eb-4222-92a4-960f632ccbaf"). InnerVolumeSpecName "kube-api-access-k9ss9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.948461 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fbff6a3-55eb-4222-92a4-960f632ccbaf-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.948510 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fbff6a3-55eb-4222-92a4-960f632ccbaf-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 12:45:03 crc kubenswrapper[4724]: I0226 12:45:03.948521 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/5fbff6a3-55eb-4222-92a4-960f632ccbaf-kube-api-access-k9ss9\") on node \"crc\" DevicePath \"\""
Feb 26 12:45:04 crc kubenswrapper[4724]: I0226 12:45:04.312528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597" event={"ID":"5fbff6a3-55eb-4222-92a4-960f632ccbaf","Type":"ContainerDied","Data":"a527f8196c49e323189e4749736b5eba8aa66846adc26525fd8d4d0d9567448b"}
Feb 26 12:45:04 crc kubenswrapper[4724]: I0226 12:45:04.312573 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a527f8196c49e323189e4749736b5eba8aa66846adc26525fd8d4d0d9567448b"
Feb 26 12:45:04 crc kubenswrapper[4724]: I0226 12:45:04.312640 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"
Feb 26 12:45:04 crc kubenswrapper[4724]: I0226 12:45:04.382653 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb"]
Feb 26 12:45:04 crc kubenswrapper[4724]: I0226 12:45:04.391816 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535120-8bzwb"]
Feb 26 12:45:05 crc kubenswrapper[4724]: I0226 12:45:05.988584 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efc605b-a275-457e-baf3-3548c0eb929e" path="/var/lib/kubelet/pods/1efc605b-a275-457e-baf3-3548c0eb929e/volumes"
Feb 26 12:45:23 crc kubenswrapper[4724]: I0226 12:45:23.183268 4724 scope.go:117] "RemoveContainer" containerID="ecf70371a2ac911d14ef78fb9b42c1844c914cdeea5dc8ef48f408c7f9676572"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.461515 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qv7kc"]
Feb 26 12:45:25 crc kubenswrapper[4724]: E0226 12:45:25.462995 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbff6a3-55eb-4222-92a4-960f632ccbaf" containerName="collect-profiles"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.463025 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbff6a3-55eb-4222-92a4-960f632ccbaf" containerName="collect-profiles"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.463315 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbff6a3-55eb-4222-92a4-960f632ccbaf" containerName="collect-profiles"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.470996 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.483095 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv7kc"]
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.558727 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-catalog-content\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.558827 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-utilities\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.558903 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfn65\" (UniqueName: \"kubernetes.io/projected/d90037e8-0b2d-4fed-b7dc-743cfd74e695-kube-api-access-rfn65\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.660928 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfn65\" (UniqueName: \"kubernetes.io/projected/d90037e8-0b2d-4fed-b7dc-743cfd74e695-kube-api-access-rfn65\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.661055 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-catalog-content\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.661111 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-utilities\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.661590 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-catalog-content\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.661693 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-utilities\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.681097 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfn65\" (UniqueName: \"kubernetes.io/projected/d90037e8-0b2d-4fed-b7dc-743cfd74e695-kube-api-access-rfn65\") pod \"redhat-marketplace-qv7kc\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") " pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:25 crc kubenswrapper[4724]: I0226 12:45:25.795259 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:26 crc kubenswrapper[4724]: I0226 12:45:26.261638 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv7kc"]
Feb 26 12:45:26 crc kubenswrapper[4724]: I0226 12:45:26.534243 4724 generic.go:334] "Generic (PLEG): container finished" podID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerID="76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06" exitCode=0
Feb 26 12:45:26 crc kubenswrapper[4724]: I0226 12:45:26.537830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerDied","Data":"76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06"}
Feb 26 12:45:26 crc kubenswrapper[4724]: I0226 12:45:26.538200 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerStarted","Data":"bc544d865ab0e6e2d95dd428db36d421d5f15b50fa85283231e86b7fe9437603"}
Feb 26 12:45:26 crc kubenswrapper[4724]: I0226 12:45:26.541169 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 12:45:27 crc kubenswrapper[4724]: I0226 12:45:27.546603 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerStarted","Data":"24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6"}
Feb 26 12:45:29 crc kubenswrapper[4724]: I0226 12:45:29.564939 4724 generic.go:334] "Generic (PLEG): container finished" podID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerID="24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6" exitCode=0
Feb 26 12:45:29 crc kubenswrapper[4724]: I0226 12:45:29.565214 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerDied","Data":"24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6"}
Feb 26 12:45:30 crc kubenswrapper[4724]: I0226 12:45:30.576327 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerStarted","Data":"14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c"}
Feb 26 12:45:30 crc kubenswrapper[4724]: I0226 12:45:30.595247 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qv7kc" podStartSLOduration=2.152027515 podStartE2EDuration="5.595223218s" podCreationTimestamp="2026-02-26 12:45:25 +0000 UTC" firstStartedPulling="2026-02-26 12:45:26.540910144 +0000 UTC m=+5993.196649249" lastFinishedPulling="2026-02-26 12:45:29.984105837 +0000 UTC m=+5996.639844952" observedRunningTime="2026-02-26 12:45:30.594347146 +0000 UTC m=+5997.250086261" watchObservedRunningTime="2026-02-26 12:45:30.595223218 +0000 UTC m=+5997.250962343"
Feb 26 12:45:35 crc kubenswrapper[4724]: I0226 12:45:35.796069 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:35 crc kubenswrapper[4724]: I0226 12:45:35.798372 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:36 crc kubenswrapper[4724]: I0226 12:45:36.849609 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qv7kc" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:45:36 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:45:36 crc kubenswrapper[4724]: >
Feb 26 12:45:45 crc kubenswrapper[4724]: I0226 12:45:45.862299 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:45 crc kubenswrapper[4724]: I0226 12:45:45.922867 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:46 crc kubenswrapper[4724]: I0226 12:45:46.100929 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv7kc"]
Feb 26 12:45:47 crc kubenswrapper[4724]: I0226 12:45:47.758613 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qv7kc" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="registry-server" containerID="cri-o://14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c" gracePeriod=2
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.653477 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.729854 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfn65\" (UniqueName: \"kubernetes.io/projected/d90037e8-0b2d-4fed-b7dc-743cfd74e695-kube-api-access-rfn65\") pod \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") "
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.730038 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-utilities\") pod \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") "
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.730076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-catalog-content\") pod \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\" (UID: \"d90037e8-0b2d-4fed-b7dc-743cfd74e695\") "
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.731214 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-utilities" (OuterVolumeSpecName: "utilities") pod "d90037e8-0b2d-4fed-b7dc-743cfd74e695" (UID: "d90037e8-0b2d-4fed-b7dc-743cfd74e695"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.757809 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d90037e8-0b2d-4fed-b7dc-743cfd74e695" (UID: "d90037e8-0b2d-4fed-b7dc-743cfd74e695"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.762398 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90037e8-0b2d-4fed-b7dc-743cfd74e695-kube-api-access-rfn65" (OuterVolumeSpecName: "kube-api-access-rfn65") pod "d90037e8-0b2d-4fed-b7dc-743cfd74e695" (UID: "d90037e8-0b2d-4fed-b7dc-743cfd74e695"). InnerVolumeSpecName "kube-api-access-rfn65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.774316 4724 generic.go:334] "Generic (PLEG): container finished" podID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerID="14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c" exitCode=0
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.774432 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qv7kc"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.774779 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerDied","Data":"14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c"}
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.774865 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qv7kc" event={"ID":"d90037e8-0b2d-4fed-b7dc-743cfd74e695","Type":"ContainerDied","Data":"bc544d865ab0e6e2d95dd428db36d421d5f15b50fa85283231e86b7fe9437603"}
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.774894 4724 scope.go:117] "RemoveContainer" containerID="14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.829227 4724 scope.go:117] "RemoveContainer" containerID="24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.832432 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv7kc"]
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.833813 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfn65\" (UniqueName: \"kubernetes.io/projected/d90037e8-0b2d-4fed-b7dc-743cfd74e695-kube-api-access-rfn65\") on node \"crc\" DevicePath \"\""
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.835483 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.835535 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d90037e8-0b2d-4fed-b7dc-743cfd74e695-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.842596 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qv7kc"]
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.848988 4724 scope.go:117] "RemoveContainer" containerID="76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.900501 4724 scope.go:117] "RemoveContainer" containerID="14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c"
Feb 26 12:45:48 crc kubenswrapper[4724]: E0226 12:45:48.901276 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c\": container with ID starting with 14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c not found: ID does not exist" containerID="14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.901324 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c"} err="failed to get container status \"14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c\": rpc error: code = NotFound desc = could not find container \"14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c\": container with ID starting with 14534280d5dd88da22bf9ea15cb624d29e593d92a75a6877d0dd03b95947449c not found: ID does not exist"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.901351 4724 scope.go:117] "RemoveContainer" containerID="24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6"
Feb 26 12:45:48 crc kubenswrapper[4724]: E0226 12:45:48.901634 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6\": container with ID starting with 24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6 not found: ID does not exist" containerID="24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.901660 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6"} err="failed to get container status \"24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6\": rpc error: code = NotFound desc = could not find container \"24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6\": container with ID starting with 24f0485b0540c93cba29afb34b8536ccfeb14bd4864944685adb744c261c8da6 not found: ID does not exist"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.901674 4724 scope.go:117] "RemoveContainer" containerID="76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06"
Feb 26 12:45:48 crc kubenswrapper[4724]: E0226 12:45:48.901998 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06\": container with ID starting with 76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06 not found: ID does not exist" containerID="76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06"
Feb 26 12:45:48 crc kubenswrapper[4724]: I0226 12:45:48.902028 4724 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06"} err="failed to get container status \"76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06\": rpc error: code = NotFound desc = could not find container \"76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06\": container with ID starting with 76a547fa81760d0f6e9aef4c95654447d49cbb5e37680d9bc599594f2e041d06 not found: ID does not exist" Feb 26 12:45:50 crc kubenswrapper[4724]: I0226 12:45:50.000549 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" path="/var/lib/kubelet/pods/d90037e8-0b2d-4fed-b7dc-743cfd74e695/volumes" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.248211 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535166-48pp6"] Feb 26 12:46:00 crc kubenswrapper[4724]: E0226 12:46:00.249161 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="extract-content" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.249196 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="extract-content" Feb 26 12:46:00 crc kubenswrapper[4724]: E0226 12:46:00.249213 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="extract-utilities" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.249223 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="extract-utilities" Feb 26 12:46:00 crc kubenswrapper[4724]: E0226 12:46:00.249250 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="registry-server" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.249260 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="registry-server" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.249501 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90037e8-0b2d-4fed-b7dc-743cfd74e695" containerName="registry-server" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.250281 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.253939 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.256433 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.256833 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.296380 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535166-48pp6"] Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.352883 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqqk\" (UniqueName: \"kubernetes.io/projected/a0136676-8165-4f49-9969-a479cdb70132-kube-api-access-8pqqk\") pod \"auto-csr-approver-29535166-48pp6\" (UID: \"a0136676-8165-4f49-9969-a479cdb70132\") " pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.454075 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqqk\" (UniqueName: \"kubernetes.io/projected/a0136676-8165-4f49-9969-a479cdb70132-kube-api-access-8pqqk\") pod \"auto-csr-approver-29535166-48pp6\" (UID: \"a0136676-8165-4f49-9969-a479cdb70132\") " pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.472168 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqqk\" (UniqueName: \"kubernetes.io/projected/a0136676-8165-4f49-9969-a479cdb70132-kube-api-access-8pqqk\") pod \"auto-csr-approver-29535166-48pp6\" (UID: \"a0136676-8165-4f49-9969-a479cdb70132\") " pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:00 crc kubenswrapper[4724]: I0226 12:46:00.607194 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:01 crc kubenswrapper[4724]: I0226 12:46:01.110372 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535166-48pp6"] Feb 26 12:46:01 crc kubenswrapper[4724]: I0226 12:46:01.889055 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535166-48pp6" event={"ID":"a0136676-8165-4f49-9969-a479cdb70132","Type":"ContainerStarted","Data":"762d5ffd7fdcf13ddff8966c961f08d7b1bd7ee018e8c4376c0d6d6646bae726"} Feb 26 12:46:02 crc kubenswrapper[4724]: I0226 12:46:02.900080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535166-48pp6" event={"ID":"a0136676-8165-4f49-9969-a479cdb70132","Type":"ContainerStarted","Data":"2908efb57d36f6bbb1943a2cf284a51443e2b3c6d8993be35dfa97561d90df92"} Feb 26 12:46:02 crc kubenswrapper[4724]: I0226 12:46:02.918680 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535166-48pp6" podStartSLOduration=2.012072497 podStartE2EDuration="2.918658082s" podCreationTimestamp="2026-02-26 12:46:00 +0000 UTC" firstStartedPulling="2026-02-26 12:46:01.113239771 +0000 UTC m=+6027.768978896" lastFinishedPulling="2026-02-26 12:46:02.019825366 +0000 UTC m=+6028.675564481" observedRunningTime="2026-02-26 12:46:02.91624388 +0000 UTC m=+6029.571982995" watchObservedRunningTime="2026-02-26 12:46:02.918658082 +0000 UTC m=+6029.574397197" Feb 26 12:46:03 crc kubenswrapper[4724]: I0226 12:46:03.941144 4724 generic.go:334] "Generic (PLEG): container finished" podID="a0136676-8165-4f49-9969-a479cdb70132" containerID="2908efb57d36f6bbb1943a2cf284a51443e2b3c6d8993be35dfa97561d90df92" exitCode=0 Feb 26 12:46:03 crc kubenswrapper[4724]: I0226 12:46:03.942521 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535166-48pp6" event={"ID":"a0136676-8165-4f49-9969-a479cdb70132","Type":"ContainerDied","Data":"2908efb57d36f6bbb1943a2cf284a51443e2b3c6d8993be35dfa97561d90df92"} Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.334025 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.352600 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pqqk\" (UniqueName: \"kubernetes.io/projected/a0136676-8165-4f49-9969-a479cdb70132-kube-api-access-8pqqk\") pod \"a0136676-8165-4f49-9969-a479cdb70132\" (UID: \"a0136676-8165-4f49-9969-a479cdb70132\") " Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.384786 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0136676-8165-4f49-9969-a479cdb70132-kube-api-access-8pqqk" (OuterVolumeSpecName: "kube-api-access-8pqqk") pod "a0136676-8165-4f49-9969-a479cdb70132" (UID: "a0136676-8165-4f49-9969-a479cdb70132"). InnerVolumeSpecName "kube-api-access-8pqqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.454873 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pqqk\" (UniqueName: \"kubernetes.io/projected/a0136676-8165-4f49-9969-a479cdb70132-kube-api-access-8pqqk\") on node \"crc\" DevicePath \"\"" Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.961303 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535166-48pp6" event={"ID":"a0136676-8165-4f49-9969-a479cdb70132","Type":"ContainerDied","Data":"762d5ffd7fdcf13ddff8966c961f08d7b1bd7ee018e8c4376c0d6d6646bae726"} Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.961355 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="762d5ffd7fdcf13ddff8966c961f08d7b1bd7ee018e8c4376c0d6d6646bae726" Feb 26 12:46:05 crc kubenswrapper[4724]: I0226 12:46:05.961418 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535166-48pp6" Feb 26 12:46:06 crc kubenswrapper[4724]: I0226 12:46:06.038631 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535160-ngp7b"] Feb 26 12:46:06 crc kubenswrapper[4724]: I0226 12:46:06.051896 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535160-ngp7b"] Feb 26 12:46:07 crc kubenswrapper[4724]: I0226 12:46:07.987201 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3cd17f-2082-4864-865f-699e05d4fc84" path="/var/lib/kubelet/pods/6f3cd17f-2082-4864-865f-699e05d4fc84/volumes" Feb 26 12:46:16 crc kubenswrapper[4724]: I0226 12:46:16.906425 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:46:16 crc kubenswrapper[4724]: I0226 12:46:16.906941 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.155460 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-27b96"] Feb 26 12:46:18 crc kubenswrapper[4724]: E0226 12:46:18.155874 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0136676-8165-4f49-9969-a479cdb70132" containerName="oc" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.155886 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0136676-8165-4f49-9969-a479cdb70132" containerName="oc" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.156104 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0136676-8165-4f49-9969-a479cdb70132" containerName="oc" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.157479 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.173648 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27b96"] Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.305532 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-utilities\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.305828 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnks\" (UniqueName: \"kubernetes.io/projected/ecae9857-24a5-4747-8f13-be3d458d48f9-kube-api-access-7hnks\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.305898 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-catalog-content\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.409243 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-utilities\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.409464 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnks\" (UniqueName: \"kubernetes.io/projected/ecae9857-24a5-4747-8f13-be3d458d48f9-kube-api-access-7hnks\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.409651 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-catalog-content\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.409781 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-utilities\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.410072 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-catalog-content\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.442994 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7hnks\" (UniqueName: \"kubernetes.io/projected/ecae9857-24a5-4747-8f13-be3d458d48f9-kube-api-access-7hnks\") pod \"redhat-operators-27b96\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:18 crc kubenswrapper[4724]: I0226 12:46:18.494706 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:19 crc kubenswrapper[4724]: I0226 12:46:19.040092 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27b96"] Feb 26 12:46:19 crc kubenswrapper[4724]: I0226 12:46:19.081065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerStarted","Data":"e90fea6d6cd38aeb9f86eba1109aa005f9ea52dffaaa2535f4eac18f01c4457e"} Feb 26 12:46:20 crc kubenswrapper[4724]: I0226 12:46:20.090861 4724 generic.go:334] "Generic (PLEG): container finished" podID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerID="3ce19b8779469bf6bb5e0d4ee93c9262b1e73302c1e7617a9b5f06cf621d8f68" exitCode=0 Feb 26 12:46:20 crc kubenswrapper[4724]: I0226 12:46:20.091247 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerDied","Data":"3ce19b8779469bf6bb5e0d4ee93c9262b1e73302c1e7617a9b5f06cf621d8f68"} Feb 26 12:46:23 crc kubenswrapper[4724]: I0226 12:46:23.126238 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerStarted","Data":"40757a409fab4e0deafbde8065aa4f2a47d65b0d755a58ff156c4b810b844e9d"} Feb 26 12:46:23 crc kubenswrapper[4724]: I0226 12:46:23.240463 4724 scope.go:117] "RemoveContainer" containerID="c03d006443ebcd4b41826f98155c754c5cd7f1f0ce325b13120e9957b3ee428e" Feb 26 12:46:38 crc kubenswrapper[4724]: I0226 12:46:38.261928 4724 generic.go:334] "Generic (PLEG): container finished" podID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerID="40757a409fab4e0deafbde8065aa4f2a47d65b0d755a58ff156c4b810b844e9d" exitCode=0 Feb 26 12:46:38 crc kubenswrapper[4724]: I0226 12:46:38.262009 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerDied","Data":"40757a409fab4e0deafbde8065aa4f2a47d65b0d755a58ff156c4b810b844e9d"} Feb 26 12:46:40 crc kubenswrapper[4724]: I0226 12:46:40.287604 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerStarted","Data":"939f3e68757847519a60d89e2f3b07b59c862023e88d1bfa65db374e8426e2e2"} Feb 26 12:46:40 crc kubenswrapper[4724]: I0226 12:46:40.349093 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-27b96" podStartSLOduration=3.433918691 podStartE2EDuration="22.349071396s" podCreationTimestamp="2026-02-26 12:46:18 +0000 UTC" firstStartedPulling="2026-02-26 12:46:20.093542821 +0000 UTC m=+6046.749281936" lastFinishedPulling="2026-02-26 12:46:39.008695536 +0000 UTC m=+6065.664434641" observedRunningTime="2026-02-26 12:46:40.34534538 +0000 UTC m=+6067.001084495" watchObservedRunningTime="2026-02-26 12:46:40.349071396 +0000 UTC m=+6067.004810531" Feb 26 
12:46:46 crc kubenswrapper[4724]: I0226 12:46:46.908977 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:46:46 crc kubenswrapper[4724]: I0226 12:46:46.909693 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:46:48 crc kubenswrapper[4724]: I0226 12:46:48.494883 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:48 crc kubenswrapper[4724]: I0226 12:46:48.495674 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:46:49 crc kubenswrapper[4724]: I0226 12:46:49.559737 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:46:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:46:49 crc kubenswrapper[4724]: > Feb 26 12:46:58 crc kubenswrapper[4724]: I0226 12:46:58.526414 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-746558bfbf-gbdpm" podUID="acbb8b99-0b04-48c7-904e-a5c5304813a3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 26 12:46:59 crc kubenswrapper[4724]: I0226 12:46:59.552783 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:46:59 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:46:59 crc kubenswrapper[4724]: > Feb 26 12:47:09 crc kubenswrapper[4724]: I0226 12:47:09.687905 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:09 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:09 crc kubenswrapper[4724]: > Feb 26 12:47:16 crc kubenswrapper[4724]: I0226 12:47:16.905644 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:47:16 crc kubenswrapper[4724]: I0226 12:47:16.906055 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:47:16 crc kubenswrapper[4724]: I0226 12:47:16.906105 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 12:47:16 crc kubenswrapper[4724]: I0226 12:47:16.907267 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:47:16 crc kubenswrapper[4724]: I0226 12:47:16.907338 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" gracePeriod=600 Feb 26 12:47:17 crc kubenswrapper[4724]: E0226 12:47:17.060373 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:47:17 crc kubenswrapper[4724]: I0226 12:47:17.623040 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" exitCode=0 Feb 26 12:47:17 crc kubenswrapper[4724]: I0226 12:47:17.623114 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b"} Feb 26 12:47:17 crc kubenswrapper[4724]: I0226 12:47:17.623197 4724 scope.go:117] "RemoveContainer" containerID="c13f8aaffd91bf6bdd288b56b623f9d351168988e4fe880b8bdef6c9cf96524e" Feb 26 12:47:17 crc kubenswrapper[4724]: I0226 12:47:17.624109 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:47:17 crc kubenswrapper[4724]: E0226 12:47:17.624704 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:47:19 crc kubenswrapper[4724]: I0226 12:47:19.537559 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:19 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:19 crc kubenswrapper[4724]: > Feb 26 12:47:21 crc kubenswrapper[4724]: I0226 12:47:21.935586 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8zqr"] Feb 26 12:47:21 crc kubenswrapper[4724]: I0226 12:47:21.959033 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-h8zqr"] Feb 26 12:47:21 crc kubenswrapper[4724]: I0226 12:47:21.959198 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.115296 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-catalog-content\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.115677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-utilities\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.115740 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvf2\" (UniqueName: \"kubernetes.io/projected/5caee04e-240d-4a57-b2f7-d0b40854b130-kube-api-access-llvf2\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.219362 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-utilities\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.219425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llvf2\" (UniqueName: \"kubernetes.io/projected/5caee04e-240d-4a57-b2f7-d0b40854b130-kube-api-access-llvf2\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.219488 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-catalog-content\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.220477 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-utilities\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.221106 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-catalog-content\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.350949 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-llvf2\" (UniqueName: \"kubernetes.io/projected/5caee04e-240d-4a57-b2f7-d0b40854b130-kube-api-access-llvf2\") pod \"community-operators-h8zqr\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:22 crc kubenswrapper[4724]: I0226 12:47:22.600525 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:24 crc kubenswrapper[4724]: I0226 12:47:24.940531 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8zqr"] Feb 26 12:47:25 crc kubenswrapper[4724]: I0226 12:47:25.725245 4724 generic.go:334] "Generic (PLEG): container finished" podID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerID="126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc" exitCode=0 Feb 26 12:47:25 crc kubenswrapper[4724]: I0226 12:47:25.725525 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerDied","Data":"126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc"} Feb 26 12:47:25 crc kubenswrapper[4724]: I0226 12:47:25.725557 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerStarted","Data":"c3cfd6dd0805c15cd76f3523414c8ddcf287537919c95d11cb2e2d1cc74ef5aa"} Feb 26 12:47:27 crc kubenswrapper[4724]: I0226 12:47:27.760746 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerStarted","Data":"99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd"} Feb 26 12:47:28 crc kubenswrapper[4724]: I0226 12:47:28.976156 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:47:28 crc kubenswrapper[4724]: E0226 12:47:28.983968 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:47:29 crc kubenswrapper[4724]: I0226 12:47:29.715140 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:29 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:29 crc kubenswrapper[4724]: > Feb 26 12:47:32 crc kubenswrapper[4724]: I0226 12:47:32.804916 4724 generic.go:334] "Generic (PLEG): container finished" podID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerID="99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd" exitCode=0 Feb 26 12:47:32 crc kubenswrapper[4724]: I0226 12:47:32.804970 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" 
event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerDied","Data":"99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd"} Feb 26 12:47:34 crc kubenswrapper[4724]: I0226 12:47:34.823677 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerStarted","Data":"8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b"} Feb 26 12:47:34 crc kubenswrapper[4724]: I0226 12:47:34.851534 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8zqr" podStartSLOduration=5.838133641 podStartE2EDuration="13.851506s" podCreationTimestamp="2026-02-26 12:47:21 +0000 UTC" firstStartedPulling="2026-02-26 12:47:25.7264706 +0000 UTC m=+6112.382209725" lastFinishedPulling="2026-02-26 12:47:33.739842959 +0000 UTC m=+6120.395582084" observedRunningTime="2026-02-26 12:47:34.842739436 +0000 UTC m=+6121.498478571" watchObservedRunningTime="2026-02-26 12:47:34.851506 +0000 UTC m=+6121.507245135" Feb 26 12:47:39 crc kubenswrapper[4724]: I0226 12:47:39.549302 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:39 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:39 crc kubenswrapper[4724]: > Feb 26 12:47:41 crc kubenswrapper[4724]: I0226 12:47:41.980479 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:47:41 crc kubenswrapper[4724]: E0226 12:47:41.981448 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:47:42 crc kubenswrapper[4724]: I0226 12:47:42.601288 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:42 crc kubenswrapper[4724]: I0226 12:47:42.601344 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:47:43 crc kubenswrapper[4724]: I0226 12:47:43.650822 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-h8zqr" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:43 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:43 crc kubenswrapper[4724]: > Feb 26 12:47:49 crc kubenswrapper[4724]: I0226 12:47:49.544796 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:49 crc kubenswrapper[4724]: > Feb 26 12:47:53 crc kubenswrapper[4724]: I0226 12:47:53.661000 4724 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-h8zqr" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:53 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:53 crc kubenswrapper[4724]: > Feb 26 12:47:54 crc kubenswrapper[4724]: I0226 12:47:54.975636 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:47:54 crc kubenswrapper[4724]: E0226 12:47:54.976267 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:47:59 crc kubenswrapper[4724]: I0226 12:47:59.549711 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:47:59 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:47:59 crc kubenswrapper[4724]: > Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.222153 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535168-fwhgm"] Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.227967 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.243075 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535168-fwhgm"] Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.256474 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.256475 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.259420 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.318701 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86gr\" (UniqueName: \"kubernetes.io/projected/196c307b-c5ee-45ec-b31d-10d3340edeee-kube-api-access-f86gr\") pod \"auto-csr-approver-29535168-fwhgm\" (UID: \"196c307b-c5ee-45ec-b31d-10d3340edeee\") " pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.420635 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86gr\" (UniqueName: \"kubernetes.io/projected/196c307b-c5ee-45ec-b31d-10d3340edeee-kube-api-access-f86gr\") pod \"auto-csr-approver-29535168-fwhgm\" (UID: \"196c307b-c5ee-45ec-b31d-10d3340edeee\") " pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.447893 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86gr\" (UniqueName: 
\"kubernetes.io/projected/196c307b-c5ee-45ec-b31d-10d3340edeee-kube-api-access-f86gr\") pod \"auto-csr-approver-29535168-fwhgm\" (UID: \"196c307b-c5ee-45ec-b31d-10d3340edeee\") " pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:00 crc kubenswrapper[4724]: I0226 12:48:00.591589 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:01 crc kubenswrapper[4724]: I0226 12:48:01.679366 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535168-fwhgm"] Feb 26 12:48:02 crc kubenswrapper[4724]: I0226 12:48:02.415564 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" event={"ID":"196c307b-c5ee-45ec-b31d-10d3340edeee","Type":"ContainerStarted","Data":"d57fb476449179e78f3b0933d603b5d1a81cb7b6a8aa3665f6c28b4386fabe08"} Feb 26 12:48:03 crc kubenswrapper[4724]: I0226 12:48:03.647043 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-h8zqr" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" probeResult="failure" output=< Feb 26 12:48:03 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:48:03 crc kubenswrapper[4724]: > Feb 26 12:48:05 crc kubenswrapper[4724]: I0226 12:48:05.442037 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" event={"ID":"196c307b-c5ee-45ec-b31d-10d3340edeee","Type":"ContainerStarted","Data":"d2f02ac65ac647fa8b165a665b0872619c2020ecf0d9887b531355a7236045e0"} Feb 26 12:48:05 crc kubenswrapper[4724]: I0226 12:48:05.465560 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" podStartSLOduration=3.9523394290000002 podStartE2EDuration="5.465541801s" podCreationTimestamp="2026-02-26 12:48:00 +0000 UTC" firstStartedPulling="2026-02-26 12:48:01.70318976 +0000 UTC m=+6148.358928865" lastFinishedPulling="2026-02-26 12:48:03.216392122 +0000 UTC m=+6149.872131237" observedRunningTime="2026-02-26 12:48:05.457027504 +0000 UTC m=+6152.112766619" watchObservedRunningTime="2026-02-26 12:48:05.465541801 +0000 UTC m=+6152.121280916" Feb 26 12:48:08 crc kubenswrapper[4724]: I0226 12:48:08.468520 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" event={"ID":"196c307b-c5ee-45ec-b31d-10d3340edeee","Type":"ContainerDied","Data":"d2f02ac65ac647fa8b165a665b0872619c2020ecf0d9887b531355a7236045e0"} Feb 26 12:48:08 crc kubenswrapper[4724]: I0226 12:48:08.468434 4724 generic.go:334] "Generic (PLEG): container finished" podID="196c307b-c5ee-45ec-b31d-10d3340edeee" containerID="d2f02ac65ac647fa8b165a665b0872619c2020ecf0d9887b531355a7236045e0" exitCode=0 Feb 26 12:48:08 crc kubenswrapper[4724]: I0226 12:48:08.975704 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:48:08 crc kubenswrapper[4724]: E0226 12:48:08.975997 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" 
podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:48:09 crc kubenswrapper[4724]: I0226 12:48:09.551830 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:48:09 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:48:09 crc kubenswrapper[4724]: > Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.035915 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.151821 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f86gr\" (UniqueName: \"kubernetes.io/projected/196c307b-c5ee-45ec-b31d-10d3340edeee-kube-api-access-f86gr\") pod \"196c307b-c5ee-45ec-b31d-10d3340edeee\" (UID: \"196c307b-c5ee-45ec-b31d-10d3340edeee\") " Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.160619 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196c307b-c5ee-45ec-b31d-10d3340edeee-kube-api-access-f86gr" (OuterVolumeSpecName: "kube-api-access-f86gr") pod "196c307b-c5ee-45ec-b31d-10d3340edeee" (UID: "196c307b-c5ee-45ec-b31d-10d3340edeee"). InnerVolumeSpecName "kube-api-access-f86gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.254964 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f86gr\" (UniqueName: \"kubernetes.io/projected/196c307b-c5ee-45ec-b31d-10d3340edeee-kube-api-access-f86gr\") on node \"crc\" DevicePath \"\"" Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.493652 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" event={"ID":"196c307b-c5ee-45ec-b31d-10d3340edeee","Type":"ContainerDied","Data":"d57fb476449179e78f3b0933d603b5d1a81cb7b6a8aa3665f6c28b4386fabe08"} Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.494264 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57fb476449179e78f3b0933d603b5d1a81cb7b6a8aa3665f6c28b4386fabe08" Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.494372 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535168-fwhgm" Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.584238 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535162-m9bs5"] Feb 26 12:48:10 crc kubenswrapper[4724]: I0226 12:48:10.593283 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535162-m9bs5"] Feb 26 12:48:12 crc kubenswrapper[4724]: I0226 12:48:12.015034 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b194c0-b53e-4974-9646-4febbeb34a16" path="/var/lib/kubelet/pods/06b194c0-b53e-4974-9646-4febbeb34a16/volumes" Feb 26 12:48:12 crc kubenswrapper[4724]: I0226 12:48:12.660022 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:48:12 crc kubenswrapper[4724]: I0226 12:48:12.722562 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:48:12 crc kubenswrapper[4724]: I0226 12:48:12.899537 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8zqr"] Feb 26 12:48:14 crc kubenswrapper[4724]: I0226 12:48:14.541636 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h8zqr" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" containerID="cri-o://8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b" gracePeriod=2 Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.103664 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.169741 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-utilities\") pod \"5caee04e-240d-4a57-b2f7-d0b40854b130\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.170104 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-catalog-content\") pod \"5caee04e-240d-4a57-b2f7-d0b40854b130\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.170233 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llvf2\" (UniqueName: \"kubernetes.io/projected/5caee04e-240d-4a57-b2f7-d0b40854b130-kube-api-access-llvf2\") pod \"5caee04e-240d-4a57-b2f7-d0b40854b130\" (UID: \"5caee04e-240d-4a57-b2f7-d0b40854b130\") " Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.181598 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-utilities" (OuterVolumeSpecName: "utilities") pod "5caee04e-240d-4a57-b2f7-d0b40854b130" (UID: "5caee04e-240d-4a57-b2f7-d0b40854b130"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.193648 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5caee04e-240d-4a57-b2f7-d0b40854b130-kube-api-access-llvf2" (OuterVolumeSpecName: "kube-api-access-llvf2") pod "5caee04e-240d-4a57-b2f7-d0b40854b130" (UID: "5caee04e-240d-4a57-b2f7-d0b40854b130"). InnerVolumeSpecName "kube-api-access-llvf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.272718 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.272751 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llvf2\" (UniqueName: \"kubernetes.io/projected/5caee04e-240d-4a57-b2f7-d0b40854b130-kube-api-access-llvf2\") on node \"crc\" DevicePath \"\"" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.386858 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5caee04e-240d-4a57-b2f7-d0b40854b130" (UID: "5caee04e-240d-4a57-b2f7-d0b40854b130"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.476709 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5caee04e-240d-4a57-b2f7-d0b40854b130-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.556237 4724 generic.go:334] "Generic (PLEG): container finished" podID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerID="8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b" exitCode=0 Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.556286 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerDied","Data":"8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b"} Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.556314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8zqr" event={"ID":"5caee04e-240d-4a57-b2f7-d0b40854b130","Type":"ContainerDied","Data":"c3cfd6dd0805c15cd76f3523414c8ddcf287537919c95d11cb2e2d1cc74ef5aa"} Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.556333 4724 scope.go:117] "RemoveContainer" containerID="8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.556460 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8zqr" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.605001 4724 scope.go:117] "RemoveContainer" containerID="99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.605166 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8zqr"] Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.613813 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h8zqr"] Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.639548 4724 scope.go:117] "RemoveContainer" containerID="126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.702819 4724 scope.go:117] "RemoveContainer" containerID="8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b" Feb 26 12:48:15 crc kubenswrapper[4724]: E0226 12:48:15.709668 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b\": container with ID starting with 8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b not found: ID does not exist" containerID="8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.709709 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b"} err="failed to get container status \"8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b\": rpc error: code = NotFound desc = could not find container \"8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b\": container with ID starting with 8bb4b05f127e82f595f0d362f7d5990bcc0ecf19d683477cea47fa3b6062a01b not found: ID does not exist" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.709754 4724 scope.go:117] "RemoveContainer" containerID="99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd" Feb 26 12:48:15 crc kubenswrapper[4724]: E0226 12:48:15.710213 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd\": container with ID starting with 99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd not found: ID does not exist" containerID="99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.710289 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd"} err="failed to get container status \"99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd\": rpc error: code = NotFound desc = could not find container \"99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd\": container with ID starting with 99b0675b50040c8bfc52712b723f4d85421a5d763058b8869ba3614ba566e0cd not found: ID does not exist" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.710327 4724 scope.go:117] "RemoveContainer" containerID="126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc" Feb 26 12:48:15 crc kubenswrapper[4724]: E0226 12:48:15.710665 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc\": container with ID starting with 126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc not found: ID does not exist" containerID="126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.710695 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc"} err="failed to get container status \"126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc\": rpc error: code = NotFound desc = could not find container \"126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc\": container with ID starting with 126ed0bf240ae06eeec483ee13a8b3e21da1b3c0ea314ca89a3501262a21dbbc not found: ID does not exist" Feb 26 12:48:15 crc kubenswrapper[4724]: I0226 12:48:15.990609 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" path="/var/lib/kubelet/pods/5caee04e-240d-4a57-b2f7-d0b40854b130/volumes" Feb 26 12:48:19 crc kubenswrapper[4724]: I0226 12:48:19.543476 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:48:19 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:48:19 crc kubenswrapper[4724]: > Feb 26 12:48:19 crc kubenswrapper[4724]: I0226 12:48:19.543838 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:48:19 crc kubenswrapper[4724]: I0226 12:48:19.545375 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"939f3e68757847519a60d89e2f3b07b59c862023e88d1bfa65db374e8426e2e2"} pod="openshift-marketplace/redhat-operators-27b96" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 26 12:48:19 crc kubenswrapper[4724]: I0226 12:48:19.545414 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" containerID="cri-o://939f3e68757847519a60d89e2f3b07b59c862023e88d1bfa65db374e8426e2e2" gracePeriod=30 Feb 26 12:48:21 crc kubenswrapper[4724]: I0226 12:48:21.978109 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:48:21 crc kubenswrapper[4724]: E0226 12:48:21.978837 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:48:23 crc kubenswrapper[4724]: I0226 12:48:23.604061 4724 scope.go:117] "RemoveContainer" containerID="f34973db78430d636aa38b31c8bbf391d6d0fddfb100723d7c197452afb5726b" Feb 26 12:48:29 crc kubenswrapper[4724]: I0226 12:48:29.685950 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerID="939f3e68757847519a60d89e2f3b07b59c862023e88d1bfa65db374e8426e2e2" exitCode=0 Feb 26 12:48:29 crc kubenswrapper[4724]: I0226 12:48:29.686037 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerDied","Data":"939f3e68757847519a60d89e2f3b07b59c862023e88d1bfa65db374e8426e2e2"} Feb 26 12:48:29 crc kubenswrapper[4724]: I0226 12:48:29.686482 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerStarted","Data":"8a7e7cf953e3a123bfa790d1c0a320ded53fcea4303b358efd57971d026cac99"} Feb 26 12:48:34 crc kubenswrapper[4724]: I0226 12:48:34.975828 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:48:34 crc kubenswrapper[4724]: E0226 12:48:34.976766 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:48:38 crc kubenswrapper[4724]: I0226 12:48:38.495127 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:48:38 crc kubenswrapper[4724]: I0226 12:48:38.495489 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:48:39 crc kubenswrapper[4724]: I0226 12:48:39.567558 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:48:39 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:48:39 crc kubenswrapper[4724]: > Feb 26 12:48:49 crc kubenswrapper[4724]: I0226 12:48:49.551672 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" probeResult="failure" output=< Feb 26 12:48:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:48:49 crc kubenswrapper[4724]: > Feb 26 12:48:49 crc kubenswrapper[4724]: I0226 12:48:49.975668 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:48:49 crc kubenswrapper[4724]: E0226 12:48:49.976039 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:48:58 crc kubenswrapper[4724]: I0226 12:48:58.554783 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:48:58 crc kubenswrapper[4724]: 
I0226 12:48:58.624854 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:48:58 crc kubenswrapper[4724]: I0226 12:48:58.804800 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27b96"] Feb 26 12:48:59 crc kubenswrapper[4724]: I0226 12:48:59.996260 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-27b96" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" containerID="cri-o://8a7e7cf953e3a123bfa790d1c0a320ded53fcea4303b358efd57971d026cac99" gracePeriod=2 Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.005655 4724 generic.go:334] "Generic (PLEG): container finished" podID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerID="8a7e7cf953e3a123bfa790d1c0a320ded53fcea4303b358efd57971d026cac99" exitCode=0 Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.005738 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerDied","Data":"8a7e7cf953e3a123bfa790d1c0a320ded53fcea4303b358efd57971d026cac99"} Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.006146 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27b96" event={"ID":"ecae9857-24a5-4747-8f13-be3d458d48f9","Type":"ContainerDied","Data":"e90fea6d6cd38aeb9f86eba1109aa005f9ea52dffaaa2535f4eac18f01c4457e"} Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.006163 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90fea6d6cd38aeb9f86eba1109aa005f9ea52dffaaa2535f4eac18f01c4457e" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.006195 4724 scope.go:117] "RemoveContainer" containerID="939f3e68757847519a60d89e2f3b07b59c862023e88d1bfa65db374e8426e2e2" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.070872 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.132969 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-catalog-content\") pod \"ecae9857-24a5-4747-8f13-be3d458d48f9\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.133172 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hnks\" (UniqueName: \"kubernetes.io/projected/ecae9857-24a5-4747-8f13-be3d458d48f9-kube-api-access-7hnks\") pod \"ecae9857-24a5-4747-8f13-be3d458d48f9\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.133241 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-utilities\") pod \"ecae9857-24a5-4747-8f13-be3d458d48f9\" (UID: \"ecae9857-24a5-4747-8f13-be3d458d48f9\") " Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.133973 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-utilities" (OuterVolumeSpecName: "utilities") pod "ecae9857-24a5-4747-8f13-be3d458d48f9" (UID: "ecae9857-24a5-4747-8f13-be3d458d48f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.147312 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecae9857-24a5-4747-8f13-be3d458d48f9-kube-api-access-7hnks" (OuterVolumeSpecName: "kube-api-access-7hnks") pod "ecae9857-24a5-4747-8f13-be3d458d48f9" (UID: "ecae9857-24a5-4747-8f13-be3d458d48f9"). InnerVolumeSpecName "kube-api-access-7hnks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.235811 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hnks\" (UniqueName: \"kubernetes.io/projected/ecae9857-24a5-4747-8f13-be3d458d48f9-kube-api-access-7hnks\") on node \"crc\" DevicePath \"\"" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.236147 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.269915 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecae9857-24a5-4747-8f13-be3d458d48f9" (UID: "ecae9857-24a5-4747-8f13-be3d458d48f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:49:01 crc kubenswrapper[4724]: I0226 12:49:01.337979 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecae9857-24a5-4747-8f13-be3d458d48f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:49:02 crc kubenswrapper[4724]: I0226 12:49:02.035276 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27b96" Feb 26 12:49:02 crc kubenswrapper[4724]: I0226 12:49:02.090958 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27b96"] Feb 26 12:49:02 crc kubenswrapper[4724]: I0226 12:49:02.098653 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-27b96"] Feb 26 12:49:02 crc kubenswrapper[4724]: I0226 12:49:02.976266 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:49:02 crc kubenswrapper[4724]: E0226 12:49:02.976924 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:49:03 crc kubenswrapper[4724]: I0226 12:49:03.996117 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" path="/var/lib/kubelet/pods/ecae9857-24a5-4747-8f13-be3d458d48f9/volumes" Feb 26 12:49:16 crc kubenswrapper[4724]: I0226 12:49:16.975741 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:49:16 crc kubenswrapper[4724]: E0226 12:49:16.976752 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:49:31 crc kubenswrapper[4724]: I0226 12:49:31.975531 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:49:31 crc kubenswrapper[4724]: E0226 12:49:31.976488 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:49:46 crc kubenswrapper[4724]: I0226 12:49:46.976285 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:49:46 crc kubenswrapper[4724]: E0226 12:49:46.977326 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:49:58 crc kubenswrapper[4724]: I0226 12:49:58.975238 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:49:58 crc 
kubenswrapper[4724]: E0226 12:49:58.976134 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.230407 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535170-gmwvw"] Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237533 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="extract-content" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237576 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="extract-content" Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237615 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237621 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237631 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237637 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237663 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="extract-content" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237669 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="extract-content" Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237687 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196c307b-c5ee-45ec-b31d-10d3340edeee" containerName="oc" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237695 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="196c307b-c5ee-45ec-b31d-10d3340edeee" containerName="oc" Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237727 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="extract-utilities" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237734 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="extract-utilities" Feb 26 12:50:00 crc kubenswrapper[4724]: E0226 12:50:00.237744 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="extract-utilities" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.237750 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="extract-utilities" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.245271 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.245346 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.245363 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5caee04e-240d-4a57-b2f7-d0b40854b130" containerName="registry-server" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.245407 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="196c307b-c5ee-45ec-b31d-10d3340edeee" containerName="oc" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.246131 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.249169 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535170-gmwvw"] Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.251586 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.254319 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.263045 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.415888 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmbz\" (UniqueName: \"kubernetes.io/projected/a7b3534a-0338-4fd7-9f04-574a30cdfea5-kube-api-access-8dmbz\") pod \"auto-csr-approver-29535170-gmwvw\" (UID: \"a7b3534a-0338-4fd7-9f04-574a30cdfea5\") " pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.517826 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmbz\" (UniqueName: \"kubernetes.io/projected/a7b3534a-0338-4fd7-9f04-574a30cdfea5-kube-api-access-8dmbz\") pod \"auto-csr-approver-29535170-gmwvw\" (UID: \"a7b3534a-0338-4fd7-9f04-574a30cdfea5\") " pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.536555 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmbz\" (UniqueName: \"kubernetes.io/projected/a7b3534a-0338-4fd7-9f04-574a30cdfea5-kube-api-access-8dmbz\") pod \"auto-csr-approver-29535170-gmwvw\" (UID: \"a7b3534a-0338-4fd7-9f04-574a30cdfea5\") " pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:00 crc kubenswrapper[4724]: I0226 12:50:00.577706 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:01 crc kubenswrapper[4724]: W0226 12:50:01.105022 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b3534a_0338_4fd7_9f04_574a30cdfea5.slice/crio-3782392d3c4aeac23138cd92bfadd179d31c530896545d84b81868cf5d3671fc WatchSource:0}: Error finding container 3782392d3c4aeac23138cd92bfadd179d31c530896545d84b81868cf5d3671fc: Status 404 returned error can't find the container with id 3782392d3c4aeac23138cd92bfadd179d31c530896545d84b81868cf5d3671fc Feb 26 12:50:01 crc kubenswrapper[4724]: I0226 12:50:01.117483 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535170-gmwvw"] Feb 26 12:50:01 crc kubenswrapper[4724]: I0226 12:50:01.579122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" event={"ID":"a7b3534a-0338-4fd7-9f04-574a30cdfea5","Type":"ContainerStarted","Data":"3782392d3c4aeac23138cd92bfadd179d31c530896545d84b81868cf5d3671fc"} Feb 26 12:50:03 crc kubenswrapper[4724]: I0226 12:50:03.603344 4724 generic.go:334] "Generic (PLEG): container finished" podID="a7b3534a-0338-4fd7-9f04-574a30cdfea5" containerID="79411f505662e7b31d383e63bd1879f82b4c44b2ad4978a5398d48e130f9f8dc" exitCode=0 Feb 26 12:50:03 crc kubenswrapper[4724]: I0226 12:50:03.603946 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" event={"ID":"a7b3534a-0338-4fd7-9f04-574a30cdfea5","Type":"ContainerDied","Data":"79411f505662e7b31d383e63bd1879f82b4c44b2ad4978a5398d48e130f9f8dc"} Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.034430 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.206724 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dmbz\" (UniqueName: \"kubernetes.io/projected/a7b3534a-0338-4fd7-9f04-574a30cdfea5-kube-api-access-8dmbz\") pod \"a7b3534a-0338-4fd7-9f04-574a30cdfea5\" (UID: \"a7b3534a-0338-4fd7-9f04-574a30cdfea5\") " Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.215469 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b3534a-0338-4fd7-9f04-574a30cdfea5-kube-api-access-8dmbz" (OuterVolumeSpecName: "kube-api-access-8dmbz") pod "a7b3534a-0338-4fd7-9f04-574a30cdfea5" (UID: "a7b3534a-0338-4fd7-9f04-574a30cdfea5"). InnerVolumeSpecName "kube-api-access-8dmbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.311439 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dmbz\" (UniqueName: \"kubernetes.io/projected/a7b3534a-0338-4fd7-9f04-574a30cdfea5-kube-api-access-8dmbz\") on node \"crc\" DevicePath \"\"" Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.677056 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" event={"ID":"a7b3534a-0338-4fd7-9f04-574a30cdfea5","Type":"ContainerDied","Data":"3782392d3c4aeac23138cd92bfadd179d31c530896545d84b81868cf5d3671fc"} Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.677106 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3782392d3c4aeac23138cd92bfadd179d31c530896545d84b81868cf5d3671fc" Feb 26 12:50:05 crc kubenswrapper[4724]: I0226 12:50:05.677169 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535170-gmwvw" Feb 26 12:50:06 crc kubenswrapper[4724]: I0226 12:50:06.114167 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535164-7jhrh"] Feb 26 12:50:06 crc kubenswrapper[4724]: I0226 12:50:06.124134 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535164-7jhrh"] Feb 26 12:50:07 crc kubenswrapper[4724]: I0226 12:50:07.991634 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf21403-b280-4c82-88d2-53f27e1bda8c" path="/var/lib/kubelet/pods/3cf21403-b280-4c82-88d2-53f27e1bda8c/volumes" Feb 26 12:50:13 crc kubenswrapper[4724]: I0226 12:50:13.982247 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:50:13 crc kubenswrapper[4724]: E0226 12:50:13.983077 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:50:23 crc kubenswrapper[4724]: I0226 12:50:23.781000 4724 scope.go:117] "RemoveContainer" containerID="74d08d306ff8f834909f20891556b71a08158b19edfee42d56a8808da3322cbe" Feb 26 12:50:27 crc kubenswrapper[4724]: I0226 12:50:27.976316 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:50:27 crc kubenswrapper[4724]: E0226 12:50:27.976964 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:50:40 crc kubenswrapper[4724]: I0226 12:50:40.975448 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:50:40 crc kubenswrapper[4724]: E0226 12:50:40.976078 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:50:53 crc kubenswrapper[4724]: I0226 12:50:53.985294 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:50:53 crc kubenswrapper[4724]: E0226 12:50:53.985858 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:51:08 crc kubenswrapper[4724]: I0226 12:51:08.976473 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:51:08 crc kubenswrapper[4724]: E0226 12:51:08.977446 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:51:23 crc kubenswrapper[4724]: I0226 12:51:23.995138 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:51:23 crc kubenswrapper[4724]: E0226 12:51:23.995912 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:51:36 crc kubenswrapper[4724]: I0226 12:51:36.975709 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:51:36 crc kubenswrapper[4724]: E0226 12:51:36.976735 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:51:51 crc kubenswrapper[4724]: I0226 12:51:51.976147 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:51:51 crc kubenswrapper[4724]: E0226 12:51:51.977211 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.157347 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535172-529sg"] Feb 26 12:52:00 crc kubenswrapper[4724]: E0226 12:52:00.158609 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.158625 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecae9857-24a5-4747-8f13-be3d458d48f9" containerName="registry-server" Feb 26 12:52:00 crc kubenswrapper[4724]: E0226 12:52:00.158638 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b3534a-0338-4fd7-9f04-574a30cdfea5" containerName="oc" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.158644 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b3534a-0338-4fd7-9f04-574a30cdfea5" containerName="oc" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.158864 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b3534a-0338-4fd7-9f04-574a30cdfea5" containerName="oc" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.173048 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.177805 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.178707 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.179404 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.190170 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535172-529sg"] Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.221389 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78497\" (UniqueName: \"kubernetes.io/projected/9ab15f4d-a142-4a56-b8de-5dd5503a2801-kube-api-access-78497\") pod \"auto-csr-approver-29535172-529sg\" (UID: \"9ab15f4d-a142-4a56-b8de-5dd5503a2801\") " pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.323596 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78497\" (UniqueName: \"kubernetes.io/projected/9ab15f4d-a142-4a56-b8de-5dd5503a2801-kube-api-access-78497\") pod \"auto-csr-approver-29535172-529sg\" (UID: \"9ab15f4d-a142-4a56-b8de-5dd5503a2801\") " pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:00 crc kubenswrapper[4724]: I0226 12:52:00.342072 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78497\" (UniqueName: \"kubernetes.io/projected/9ab15f4d-a142-4a56-b8de-5dd5503a2801-kube-api-access-78497\") pod \"auto-csr-approver-29535172-529sg\" (UID: \"9ab15f4d-a142-4a56-b8de-5dd5503a2801\") " pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:00 crc kubenswrapper[4724]: 
I0226 12:52:00.506100 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:01 crc kubenswrapper[4724]: I0226 12:52:01.032486 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535172-529sg"] Feb 26 12:52:01 crc kubenswrapper[4724]: I0226 12:52:01.039463 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:52:01 crc kubenswrapper[4724]: I0226 12:52:01.251620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535172-529sg" event={"ID":"9ab15f4d-a142-4a56-b8de-5dd5503a2801","Type":"ContainerStarted","Data":"9b5a1b3208a43850186de149257292707f6d6de07d68d3cb8953d1d4d027a3c3"} Feb 26 12:52:03 crc kubenswrapper[4724]: I0226 12:52:03.271070 4724 generic.go:334] "Generic (PLEG): container finished" podID="9ab15f4d-a142-4a56-b8de-5dd5503a2801" containerID="b081d60392d7e9b58c2fcbf85450d2c8290dbde21bb2b3731eaabe06a4e42d27" exitCode=0 Feb 26 12:52:03 crc kubenswrapper[4724]: I0226 12:52:03.271135 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535172-529sg" event={"ID":"9ab15f4d-a142-4a56-b8de-5dd5503a2801","Type":"ContainerDied","Data":"b081d60392d7e9b58c2fcbf85450d2c8290dbde21bb2b3731eaabe06a4e42d27"} Feb 26 12:52:04 crc kubenswrapper[4724]: I0226 12:52:04.706991 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:04 crc kubenswrapper[4724]: I0226 12:52:04.746425 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78497\" (UniqueName: \"kubernetes.io/projected/9ab15f4d-a142-4a56-b8de-5dd5503a2801-kube-api-access-78497\") pod \"9ab15f4d-a142-4a56-b8de-5dd5503a2801\" (UID: \"9ab15f4d-a142-4a56-b8de-5dd5503a2801\") " Feb 26 12:52:04 crc kubenswrapper[4724]: I0226 12:52:04.752827 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab15f4d-a142-4a56-b8de-5dd5503a2801-kube-api-access-78497" (OuterVolumeSpecName: "kube-api-access-78497") pod "9ab15f4d-a142-4a56-b8de-5dd5503a2801" (UID: "9ab15f4d-a142-4a56-b8de-5dd5503a2801"). InnerVolumeSpecName "kube-api-access-78497". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:52:04 crc kubenswrapper[4724]: I0226 12:52:04.849655 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78497\" (UniqueName: \"kubernetes.io/projected/9ab15f4d-a142-4a56-b8de-5dd5503a2801-kube-api-access-78497\") on node \"crc\" DevicePath \"\"" Feb 26 12:52:05 crc kubenswrapper[4724]: I0226 12:52:05.300937 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535172-529sg" event={"ID":"9ab15f4d-a142-4a56-b8de-5dd5503a2801","Type":"ContainerDied","Data":"9b5a1b3208a43850186de149257292707f6d6de07d68d3cb8953d1d4d027a3c3"} Feb 26 12:52:05 crc kubenswrapper[4724]: I0226 12:52:05.301584 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5a1b3208a43850186de149257292707f6d6de07d68d3cb8953d1d4d027a3c3" Feb 26 12:52:05 crc kubenswrapper[4724]: I0226 12:52:05.300953 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535172-529sg" Feb 26 12:52:05 crc kubenswrapper[4724]: I0226 12:52:05.815194 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535166-48pp6"] Feb 26 12:52:05 crc kubenswrapper[4724]: I0226 12:52:05.827455 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535166-48pp6"] Feb 26 12:52:05 crc kubenswrapper[4724]: I0226 12:52:05.992214 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0136676-8165-4f49-9969-a479cdb70132" path="/var/lib/kubelet/pods/a0136676-8165-4f49-9969-a479cdb70132/volumes" Feb 26 12:52:06 crc kubenswrapper[4724]: I0226 12:52:06.975869 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:52:06 crc kubenswrapper[4724]: E0226 12:52:06.976348 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:52:19 crc kubenswrapper[4724]: I0226 12:52:19.975773 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b" Feb 26 12:52:20 crc kubenswrapper[4724]: I0226 12:52:20.432848 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"3e26710043bff6da437222d56ea51a7da623eabb0b8e2e2eb93e241e3e4a190d"} Feb 26 12:52:22 crc kubenswrapper[4724]: I0226 12:52:22.942401 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qqh2r"] Feb 26 12:52:22 crc kubenswrapper[4724]: E0226 12:52:22.943231 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab15f4d-a142-4a56-b8de-5dd5503a2801" containerName="oc" Feb 26 12:52:22 crc kubenswrapper[4724]: I0226 12:52:22.943244 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab15f4d-a142-4a56-b8de-5dd5503a2801" containerName="oc" Feb 26 12:52:22 crc kubenswrapper[4724]: I0226 12:52:22.943461 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab15f4d-a142-4a56-b8de-5dd5503a2801" containerName="oc" Feb 26 12:52:22 crc kubenswrapper[4724]: I0226 12:52:22.946071 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:22 crc kubenswrapper[4724]: I0226 12:52:22.954077 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqh2r"] Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.028720 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-catalog-content\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.028818 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-utilities\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.028877 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljg8\" (UniqueName: \"kubernetes.io/projected/acda1906-4aeb-4391-b678-cc0222611ae4-kube-api-access-bljg8\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.130827 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-catalog-content\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.130931 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-utilities\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.130987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bljg8\" (UniqueName: \"kubernetes.io/projected/acda1906-4aeb-4391-b678-cc0222611ae4-kube-api-access-bljg8\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.131299 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-catalog-content\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.131915 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-utilities\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.152022 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bljg8\" (UniqueName: \"kubernetes.io/projected/acda1906-4aeb-4391-b678-cc0222611ae4-kube-api-access-bljg8\") pod \"certified-operators-qqh2r\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.261587 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.929015 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqh2r"] Feb 26 12:52:23 crc kubenswrapper[4724]: I0226 12:52:23.939629 4724 scope.go:117] "RemoveContainer" containerID="3ce19b8779469bf6bb5e0d4ee93c9262b1e73302c1e7617a9b5f06cf621d8f68" Feb 26 12:52:24 crc kubenswrapper[4724]: I0226 12:52:24.045294 4724 scope.go:117] "RemoveContainer" containerID="40757a409fab4e0deafbde8065aa4f2a47d65b0d755a58ff156c4b810b844e9d" Feb 26 12:52:24 crc kubenswrapper[4724]: I0226 12:52:24.072001 4724 scope.go:117] "RemoveContainer" containerID="2908efb57d36f6bbb1943a2cf284a51443e2b3c6d8993be35dfa97561d90df92" Feb 26 12:52:24 crc kubenswrapper[4724]: I0226 12:52:24.482115 4724 generic.go:334] "Generic (PLEG): container finished" podID="acda1906-4aeb-4391-b678-cc0222611ae4" containerID="f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3" exitCode=0 Feb 26 12:52:24 crc kubenswrapper[4724]: I0226 12:52:24.482159 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerDied","Data":"f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3"} Feb 26 12:52:24 crc kubenswrapper[4724]: I0226 12:52:24.482204 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerStarted","Data":"66d19b8e56ded762f7cab0e9f957d72bef95212a898d7d25d71994e001fec3cb"} Feb 26 12:52:26 crc kubenswrapper[4724]: I0226 12:52:26.509576 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerStarted","Data":"1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9"} Feb 26 12:52:28 crc kubenswrapper[4724]: E0226 12:52:28.329529 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacda1906_4aeb_4391_b678_cc0222611ae4.slice/crio-1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacda1906_4aeb_4391_b678_cc0222611ae4.slice/crio-conmon-1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9.scope\": RecentStats: unable to find data in memory cache]" Feb 26 12:52:28 crc kubenswrapper[4724]: I0226 12:52:28.538278 4724 generic.go:334] "Generic (PLEG): container finished" podID="acda1906-4aeb-4391-b678-cc0222611ae4" containerID="1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9" exitCode=0 Feb 26 12:52:28 crc kubenswrapper[4724]: I0226 12:52:28.538378 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" 
event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerDied","Data":"1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9"} Feb 26 12:52:29 crc kubenswrapper[4724]: I0226 12:52:29.548466 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerStarted","Data":"a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9"} Feb 26 12:52:29 crc kubenswrapper[4724]: I0226 12:52:29.574884 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qqh2r" podStartSLOduration=3.116546951 podStartE2EDuration="7.5748476s" podCreationTimestamp="2026-02-26 12:52:22 +0000 UTC" firstStartedPulling="2026-02-26 12:52:24.484819494 +0000 UTC m=+6411.140558609" lastFinishedPulling="2026-02-26 12:52:28.943120143 +0000 UTC m=+6415.598859258" observedRunningTime="2026-02-26 12:52:29.564355072 +0000 UTC m=+6416.220094207" watchObservedRunningTime="2026-02-26 12:52:29.5748476 +0000 UTC m=+6416.230586715" Feb 26 12:52:33 crc kubenswrapper[4724]: I0226 12:52:33.261821 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:33 crc kubenswrapper[4724]: I0226 12:52:33.262367 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:34 crc kubenswrapper[4724]: I0226 12:52:34.315912 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qqh2r" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="registry-server" probeResult="failure" output=< Feb 26 12:52:34 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:52:34 crc kubenswrapper[4724]: > Feb 26 12:52:43 crc kubenswrapper[4724]: I0226 12:52:43.319524 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:43 crc kubenswrapper[4724]: I0226 12:52:43.374672 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:43 crc kubenswrapper[4724]: I0226 12:52:43.577423 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqh2r"] Feb 26 12:52:44 crc kubenswrapper[4724]: I0226 12:52:44.760494 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qqh2r" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="registry-server" containerID="cri-o://a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9" gracePeriod=2 Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.523681 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.651268 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-catalog-content\") pod \"acda1906-4aeb-4391-b678-cc0222611ae4\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.651392 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-utilities\") pod \"acda1906-4aeb-4391-b678-cc0222611ae4\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.651452 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bljg8\" (UniqueName: \"kubernetes.io/projected/acda1906-4aeb-4391-b678-cc0222611ae4-kube-api-access-bljg8\") pod \"acda1906-4aeb-4391-b678-cc0222611ae4\" (UID: \"acda1906-4aeb-4391-b678-cc0222611ae4\") " Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.652258 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-utilities" (OuterVolumeSpecName: "utilities") pod "acda1906-4aeb-4391-b678-cc0222611ae4" (UID: "acda1906-4aeb-4391-b678-cc0222611ae4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.660266 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acda1906-4aeb-4391-b678-cc0222611ae4-kube-api-access-bljg8" (OuterVolumeSpecName: "kube-api-access-bljg8") pod "acda1906-4aeb-4391-b678-cc0222611ae4" (UID: "acda1906-4aeb-4391-b678-cc0222611ae4"). InnerVolumeSpecName "kube-api-access-bljg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.713490 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acda1906-4aeb-4391-b678-cc0222611ae4" (UID: "acda1906-4aeb-4391-b678-cc0222611ae4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.754031 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.754070 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda1906-4aeb-4391-b678-cc0222611ae4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.754085 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bljg8\" (UniqueName: \"kubernetes.io/projected/acda1906-4aeb-4391-b678-cc0222611ae4-kube-api-access-bljg8\") on node \"crc\" DevicePath \"\"" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.774396 4724 generic.go:334] "Generic (PLEG): container finished" podID="acda1906-4aeb-4391-b678-cc0222611ae4" containerID="a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9" exitCode=0 Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.774449 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerDied","Data":"a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9"} Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.774476 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqh2r" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.774523 4724 scope.go:117] "RemoveContainer" containerID="a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.774483 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqh2r" event={"ID":"acda1906-4aeb-4391-b678-cc0222611ae4","Type":"ContainerDied","Data":"66d19b8e56ded762f7cab0e9f957d72bef95212a898d7d25d71994e001fec3cb"} Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.822939 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqh2r"] Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.824705 4724 scope.go:117] "RemoveContainer" containerID="1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.832583 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qqh2r"] Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.853499 4724 scope.go:117] "RemoveContainer" containerID="f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.902345 4724 scope.go:117] "RemoveContainer" containerID="a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9" Feb 26 12:52:45 crc kubenswrapper[4724]: E0226 12:52:45.903022 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9\": container with ID starting with a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9 not found: ID does not exist" containerID="a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.903053 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9"} err="failed to get container status \"a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9\": rpc error: code = NotFound desc = could not find container \"a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9\": container with ID starting with a95c9b73559c6006e1303310f5a4cc22f4f8be784c512fee00f1ee09034158a9 not found: ID does not exist" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.903079 4724 scope.go:117] "RemoveContainer" containerID="1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9" Feb 26 12:52:45 crc kubenswrapper[4724]: E0226 12:52:45.903418 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9\": container with ID starting with 1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9 not found: ID does not exist" containerID="1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.903452 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9"} err="failed to get container status \"1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9\": rpc error: code = NotFound desc = could not find container \"1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9\": container with ID starting with 1c84a200c4fa185e1622ea0de5763a164c782824931ed704468f89c0bb0039f9 not found: ID does not exist" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.903479 4724 scope.go:117] "RemoveContainer" containerID="f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3" Feb 26 12:52:45 crc kubenswrapper[4724]: E0226 12:52:45.903799 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3\": container with ID starting with f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3 not found: ID does not exist" containerID="f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.903821 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3"} err="failed to get container status \"f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3\": rpc error: code = NotFound desc = could not find container \"f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3\": container with ID starting with f43f8b062de68c575aa9ff75db87c44950445cb1f531f8053c5df36f35a9cff3 not found: ID does not exist" Feb 26 12:52:45 crc kubenswrapper[4724]: I0226 12:52:45.991514 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" path="/var/lib/kubelet/pods/acda1906-4aeb-4391-b678-cc0222611ae4/volumes"
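The three "ContainerStatus from runtime service failed ... NotFound" errors above are benign: by the time the kubelet re-queries CRI-O for IDs it has just removed, the containers are already gone, so the DeleteContainer path logs the miss and moves on. The common way to write such cleanup against a gRPC CRI runtime is to treat codes.NotFound as "already deleted"; a sketch under that assumption (the helper name is illustrative, not taken from kubelet source):

    package cleanup

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeIfPresent deletes a container and treats a gRPC NotFound from the
    // runtime as success: the container is already gone, so there is nothing
    // left to do. status.Code(nil) is codes.OK, so a clean delete also passes.
    func removeIfPresent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
        if status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }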
podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="extract-utilities" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.173592 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="extract-utilities" Feb 26 12:54:00 crc kubenswrapper[4724]: E0226 12:54:00.173612 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="extract-content" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.173618 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="extract-content" Feb 26 12:54:00 crc kubenswrapper[4724]: E0226 12:54:00.173648 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="registry-server" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.173654 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="registry-server" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.173846 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="acda1906-4aeb-4391-b678-cc0222611ae4" containerName="registry-server" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.174584 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.183952 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535174-5p96g"] Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.218816 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.219075 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.219218 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.223306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd9w\" (UniqueName: \"kubernetes.io/projected/b998271d-dfc5-4a88-847b-a3223fa163e4-kube-api-access-2gd9w\") pod \"auto-csr-approver-29535174-5p96g\" (UID: \"b998271d-dfc5-4a88-847b-a3223fa163e4\") " pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.324651 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gd9w\" (UniqueName: \"kubernetes.io/projected/b998271d-dfc5-4a88-847b-a3223fa163e4-kube-api-access-2gd9w\") pod \"auto-csr-approver-29535174-5p96g\" (UID: \"b998271d-dfc5-4a88-847b-a3223fa163e4\") " pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.347102 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gd9w\" (UniqueName: \"kubernetes.io/projected/b998271d-dfc5-4a88-847b-a3223fa163e4-kube-api-access-2gd9w\") pod \"auto-csr-approver-29535174-5p96g\" (UID: \"b998271d-dfc5-4a88-847b-a3223fa163e4\") " pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:00 crc kubenswrapper[4724]: I0226 12:54:00.535509 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:01 crc kubenswrapper[4724]: I0226 12:54:01.166395 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535174-5p96g"] Feb 26 12:54:01 crc kubenswrapper[4724]: I0226 12:54:01.803142 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535174-5p96g" event={"ID":"b998271d-dfc5-4a88-847b-a3223fa163e4","Type":"ContainerStarted","Data":"bdbdcca133a13cd09336e9a49379e3e2184668466aef286e9cbf0638ac7b2686"} Feb 26 12:54:02 crc kubenswrapper[4724]: I0226 12:54:02.812029 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535174-5p96g" event={"ID":"b998271d-dfc5-4a88-847b-a3223fa163e4","Type":"ContainerStarted","Data":"7db680ae75e13faf0d0e01f9963436847eba6b7f701a870d9b2c7cda7b6a579a"} Feb 26 12:54:03 crc kubenswrapper[4724]: I0226 12:54:03.823248 4724 generic.go:334] "Generic (PLEG): container finished" podID="b998271d-dfc5-4a88-847b-a3223fa163e4" containerID="7db680ae75e13faf0d0e01f9963436847eba6b7f701a870d9b2c7cda7b6a579a" exitCode=0 Feb 26 12:54:03 crc kubenswrapper[4724]: I0226 12:54:03.823338 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535174-5p96g" event={"ID":"b998271d-dfc5-4a88-847b-a3223fa163e4","Type":"ContainerDied","Data":"7db680ae75e13faf0d0e01f9963436847eba6b7f701a870d9b2c7cda7b6a579a"} Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.246136 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.324996 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gd9w\" (UniqueName: \"kubernetes.io/projected/b998271d-dfc5-4a88-847b-a3223fa163e4-kube-api-access-2gd9w\") pod \"b998271d-dfc5-4a88-847b-a3223fa163e4\" (UID: \"b998271d-dfc5-4a88-847b-a3223fa163e4\") " Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.356437 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b998271d-dfc5-4a88-847b-a3223fa163e4-kube-api-access-2gd9w" (OuterVolumeSpecName: "kube-api-access-2gd9w") pod "b998271d-dfc5-4a88-847b-a3223fa163e4" (UID: "b998271d-dfc5-4a88-847b-a3223fa163e4"). InnerVolumeSpecName "kube-api-access-2gd9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.427597 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gd9w\" (UniqueName: \"kubernetes.io/projected/b998271d-dfc5-4a88-847b-a3223fa163e4-kube-api-access-2gd9w\") on node \"crc\" DevicePath \"\"" Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.841253 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535174-5p96g" event={"ID":"b998271d-dfc5-4a88-847b-a3223fa163e4","Type":"ContainerDied","Data":"bdbdcca133a13cd09336e9a49379e3e2184668466aef286e9cbf0638ac7b2686"} Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.841299 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdbdcca133a13cd09336e9a49379e3e2184668466aef286e9cbf0638ac7b2686" Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.841308 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535174-5p96g" Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.911782 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535168-fwhgm"] Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.921577 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535168-fwhgm"] Feb 26 12:54:05 crc kubenswrapper[4724]: I0226 12:54:05.991620 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196c307b-c5ee-45ec-b31d-10d3340edeee" path="/var/lib/kubelet/pods/196c307b-c5ee-45ec-b31d-10d3340edeee/volumes" Feb 26 12:54:24 crc kubenswrapper[4724]: I0226 12:54:24.205279 4724 scope.go:117] "RemoveContainer" containerID="d2f02ac65ac647fa8b165a665b0872619c2020ecf0d9887b531355a7236045e0" Feb 26 12:54:46 crc kubenswrapper[4724]: I0226 12:54:46.906361 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:54:46 crc kubenswrapper[4724]: I0226 12:54:46.907016 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:55:16 crc kubenswrapper[4724]: I0226 12:55:16.908609 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:55:16 crc kubenswrapper[4724]: I0226 12:55:16.909689 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:55:24 crc kubenswrapper[4724]: I0226 12:55:24.273743 4724 scope.go:117] "RemoveContainer" containerID="8a7e7cf953e3a123bfa790d1c0a320ded53fcea4303b358efd57971d026cac99" Feb 26 12:55:46 crc kubenswrapper[4724]: I0226 12:55:46.906021 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:55:46 crc kubenswrapper[4724]: I0226 12:55:46.906769 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:55:46 crc kubenswrapper[4724]: I0226 12:55:46.906841 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 12:55:46 crc kubenswrapper[4724]: I0226 12:55:46.908088 4724 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e26710043bff6da437222d56ea51a7da623eabb0b8e2e2eb93e241e3e4a190d"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:55:46 crc kubenswrapper[4724]: I0226 12:55:46.908251 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://3e26710043bff6da437222d56ea51a7da623eabb0b8e2e2eb93e241e3e4a190d" gracePeriod=600 Feb 26 12:55:47 crc kubenswrapper[4724]: I0226 12:55:47.819535 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="3e26710043bff6da437222d56ea51a7da623eabb0b8e2e2eb93e241e3e4a190d" exitCode=0 Feb 26 12:55:47 crc kubenswrapper[4724]: I0226 12:55:47.819608 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"3e26710043bff6da437222d56ea51a7da623eabb0b8e2e2eb93e241e3e4a190d"} Feb 26 12:55:47 crc kubenswrapper[4724]: I0226 12:55:47.819890 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"} Feb 26 12:55:47 crc kubenswrapper[4724]: I0226 12:55:47.819919 4724 scope.go:117] "RemoveContainer" containerID="8ad61f3c7951cde1cf4688ff6e8d35d11417b05168323f83cce8e92de65f225b"
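This restart is the liveness machinery working as intended: the kubelet's GETs to http://127.0.0.1:8798/health were refused on a 30-second cadence (12:54:46, 12:55:16, 12:55:46), consistent with the default failure threshold of three, after which the container was killed with the pod's 600-second grace period and replaced. An HTTP probe of this kind is an ordinary GET where any transport error or status outside 200-399 counts as a failure; a standard-library sketch:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe mimics an HTTP liveness check: transport errors (such as the
    // "connection refused" in the log) and statuses outside 200-399 fail.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }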
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.168448 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.169289 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.173195 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535176-bg2ld"] Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.179273 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.320115 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cmxp\" (UniqueName: \"kubernetes.io/projected/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384-kube-api-access-4cmxp\") pod \"auto-csr-approver-29535176-bg2ld\" (UID: \"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384\") " pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.422545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cmxp\" (UniqueName: \"kubernetes.io/projected/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384-kube-api-access-4cmxp\") pod \"auto-csr-approver-29535176-bg2ld\" (UID: \"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384\") " pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.441996 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cmxp\" (UniqueName: \"kubernetes.io/projected/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384-kube-api-access-4cmxp\") pod \"auto-csr-approver-29535176-bg2ld\" (UID: \"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384\") " pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.479008 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.844984 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535176-bg2ld"] Feb 26 12:56:00 crc kubenswrapper[4724]: I0226 12:56:00.938641 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" event={"ID":"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384","Type":"ContainerStarted","Data":"bdaed9ee6c61a3fd0c7de838f868c9903915feb221b2aad9721c64c86fd7b2dd"} Feb 26 12:56:02 crc kubenswrapper[4724]: I0226 12:56:02.955398 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" event={"ID":"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384","Type":"ContainerStarted","Data":"1bd75aa3245f4c261eb253cce2e2d9e95e353d3c46382fc73386a37c2668515e"} Feb 26 12:56:02 crc kubenswrapper[4724]: I0226 12:56:02.982485 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" podStartSLOduration=1.510650897 podStartE2EDuration="2.982468787s" podCreationTimestamp="2026-02-26 12:56:00 +0000 UTC" firstStartedPulling="2026-02-26 12:56:00.862414554 +0000 UTC m=+6627.518153669" lastFinishedPulling="2026-02-26 12:56:02.334232424 +0000 UTC m=+6628.989971559" observedRunningTime="2026-02-26 12:56:02.98103303 +0000 UTC m=+6629.636772165" watchObservedRunningTime="2026-02-26 12:56:02.982468787 +0000 UTC m=+6629.638207902" Feb 26 12:56:03 crc kubenswrapper[4724]: I0226 12:56:03.966112 4724 generic.go:334] "Generic (PLEG): container finished" podID="ab95d02d-14dc-4bbe-bc19-79e4bc9a5384" containerID="1bd75aa3245f4c261eb253cce2e2d9e95e353d3c46382fc73386a37c2668515e" exitCode=0 Feb 26 12:56:03 crc kubenswrapper[4724]: I0226 12:56:03.966160 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" event={"ID":"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384","Type":"ContainerDied","Data":"1bd75aa3245f4c261eb253cce2e2d9e95e353d3c46382fc73386a37c2668515e"} Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.482469 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.567072 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cmxp\" (UniqueName: \"kubernetes.io/projected/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384-kube-api-access-4cmxp\") pod \"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384\" (UID: \"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384\") " Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.575478 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384-kube-api-access-4cmxp" (OuterVolumeSpecName: "kube-api-access-4cmxp") pod "ab95d02d-14dc-4bbe-bc19-79e4bc9a5384" (UID: "ab95d02d-14dc-4bbe-bc19-79e4bc9a5384"). InnerVolumeSpecName "kube-api-access-4cmxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.668679 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cmxp\" (UniqueName: \"kubernetes.io/projected/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384-kube-api-access-4cmxp\") on node \"crc\" DevicePath \"\"" Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.984304 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.984823 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535176-bg2ld" event={"ID":"ab95d02d-14dc-4bbe-bc19-79e4bc9a5384","Type":"ContainerDied","Data":"bdaed9ee6c61a3fd0c7de838f868c9903915feb221b2aad9721c64c86fd7b2dd"} Feb 26 12:56:05 crc kubenswrapper[4724]: I0226 12:56:05.984853 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdaed9ee6c61a3fd0c7de838f868c9903915feb221b2aad9721c64c86fd7b2dd" Feb 26 12:56:06 crc kubenswrapper[4724]: I0226 12:56:06.066441 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535170-gmwvw"] Feb 26 12:56:06 crc kubenswrapper[4724]: I0226 12:56:06.086138 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535170-gmwvw"] Feb 26 12:56:07 crc kubenswrapper[4724]: I0226 12:56:07.987883 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7b3534a-0338-4fd7-9f04-574a30cdfea5" path="/var/lib/kubelet/pods/a7b3534a-0338-4fd7-9f04-574a30cdfea5/volumes" Feb 26 12:56:24 crc kubenswrapper[4724]: I0226 12:56:24.325374 4724 scope.go:117] "RemoveContainer" containerID="79411f505662e7b31d383e63bd1879f82b4c44b2ad4978a5398d48e130f9f8dc" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.161880 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535178-ld56v"] Feb 26 12:58:00 crc kubenswrapper[4724]: E0226 12:58:00.163065 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab95d02d-14dc-4bbe-bc19-79e4bc9a5384" containerName="oc" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.163083 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab95d02d-14dc-4bbe-bc19-79e4bc9a5384" containerName="oc" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.163375 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab95d02d-14dc-4bbe-bc19-79e4bc9a5384" containerName="oc" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.164233 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.166852 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.167226 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.167804 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.208922 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535178-ld56v"] Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.316452 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4z2\" (UniqueName: \"kubernetes.io/projected/0a1882c9-33ec-4665-a39c-50f62e73280f-kube-api-access-6m4z2\") pod \"auto-csr-approver-29535178-ld56v\" (UID: \"0a1882c9-33ec-4665-a39c-50f62e73280f\") " pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.417818 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4z2\" (UniqueName: \"kubernetes.io/projected/0a1882c9-33ec-4665-a39c-50f62e73280f-kube-api-access-6m4z2\") pod \"auto-csr-approver-29535178-ld56v\" (UID: \"0a1882c9-33ec-4665-a39c-50f62e73280f\") " pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.440933 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4z2\" (UniqueName: \"kubernetes.io/projected/0a1882c9-33ec-4665-a39c-50f62e73280f-kube-api-access-6m4z2\") pod \"auto-csr-approver-29535178-ld56v\" (UID: \"0a1882c9-33ec-4665-a39c-50f62e73280f\") " pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.494666 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.986098 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 12:58:00 crc kubenswrapper[4724]: I0226 12:58:00.995346 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535178-ld56v"] Feb 26 12:58:01 crc kubenswrapper[4724]: I0226 12:58:01.170765 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535178-ld56v" event={"ID":"0a1882c9-33ec-4665-a39c-50f62e73280f","Type":"ContainerStarted","Data":"fec41c1b58cee92b626463d1b9915f9960277a3508c23f617600f66f99a6d197"} Feb 26 12:58:04 crc kubenswrapper[4724]: I0226 12:58:04.204878 4724 generic.go:334] "Generic (PLEG): container finished" podID="0a1882c9-33ec-4665-a39c-50f62e73280f" containerID="9f97590cafdf4ab6071f87ecc25bdd22bfcdcd705003df33857d60aaa46e220d" exitCode=0 Feb 26 12:58:04 crc kubenswrapper[4724]: I0226 12:58:04.204949 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535178-ld56v" event={"ID":"0a1882c9-33ec-4665-a39c-50f62e73280f","Type":"ContainerDied","Data":"9f97590cafdf4ab6071f87ecc25bdd22bfcdcd705003df33857d60aaa46e220d"} Feb 26 12:58:05 crc kubenswrapper[4724]: I0226 12:58:05.696874 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:05 crc kubenswrapper[4724]: I0226 12:58:05.830537 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4z2\" (UniqueName: \"kubernetes.io/projected/0a1882c9-33ec-4665-a39c-50f62e73280f-kube-api-access-6m4z2\") pod \"0a1882c9-33ec-4665-a39c-50f62e73280f\" (UID: \"0a1882c9-33ec-4665-a39c-50f62e73280f\") " Feb 26 12:58:05 crc kubenswrapper[4724]: I0226 12:58:05.836447 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1882c9-33ec-4665-a39c-50f62e73280f-kube-api-access-6m4z2" (OuterVolumeSpecName: "kube-api-access-6m4z2") pod "0a1882c9-33ec-4665-a39c-50f62e73280f" (UID: "0a1882c9-33ec-4665-a39c-50f62e73280f"). InnerVolumeSpecName "kube-api-access-6m4z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:58:05 crc kubenswrapper[4724]: I0226 12:58:05.933065 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m4z2\" (UniqueName: \"kubernetes.io/projected/0a1882c9-33ec-4665-a39c-50f62e73280f-kube-api-access-6m4z2\") on node \"crc\" DevicePath \"\"" Feb 26 12:58:06 crc kubenswrapper[4724]: I0226 12:58:06.231537 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535178-ld56v" event={"ID":"0a1882c9-33ec-4665-a39c-50f62e73280f","Type":"ContainerDied","Data":"fec41c1b58cee92b626463d1b9915f9960277a3508c23f617600f66f99a6d197"} Feb 26 12:58:06 crc kubenswrapper[4724]: I0226 12:58:06.231600 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec41c1b58cee92b626463d1b9915f9960277a3508c23f617600f66f99a6d197" Feb 26 12:58:06 crc kubenswrapper[4724]: I0226 12:58:06.231683 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535178-ld56v" Feb 26 12:58:06 crc kubenswrapper[4724]: I0226 12:58:06.777673 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535172-529sg"] Feb 26 12:58:06 crc kubenswrapper[4724]: I0226 12:58:06.786659 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535172-529sg"] Feb 26 12:58:07 crc kubenswrapper[4724]: I0226 12:58:07.988603 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ab15f4d-a142-4a56-b8de-5dd5503a2801" path="/var/lib/kubelet/pods/9ab15f4d-a142-4a56-b8de-5dd5503a2801/volumes" Feb 26 12:58:16 crc kubenswrapper[4724]: I0226 12:58:16.906131 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:58:16 crc kubenswrapper[4724]: I0226 12:58:16.906711 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:58:24 crc kubenswrapper[4724]: I0226 12:58:24.419139 4724 scope.go:117] "RemoveContainer" containerID="b081d60392d7e9b58c2fcbf85450d2c8290dbde21bb2b3731eaabe06a4e42d27" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.608755 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rrj4m"] Feb 26 12:58:31 crc kubenswrapper[4724]: E0226 12:58:31.609786 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1882c9-33ec-4665-a39c-50f62e73280f" containerName="oc" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.609804 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1882c9-33ec-4665-a39c-50f62e73280f" containerName="oc" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.610039 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1882c9-33ec-4665-a39c-50f62e73280f" containerName="oc" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.611707 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.629771 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrj4m"] Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.776802 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-catalog-content\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.777598 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcfqg\" (UniqueName: \"kubernetes.io/projected/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-kube-api-access-xcfqg\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.777772 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-utilities\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.879108 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-catalog-content\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.879298 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcfqg\" (UniqueName: \"kubernetes.io/projected/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-kube-api-access-xcfqg\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.879377 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-utilities\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.879753 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-catalog-content\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.879767 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-utilities\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.900356 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xcfqg\" (UniqueName: \"kubernetes.io/projected/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-kube-api-access-xcfqg\") pod \"community-operators-rrj4m\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:31 crc kubenswrapper[4724]: I0226 12:58:31.946682 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:32 crc kubenswrapper[4724]: I0226 12:58:32.484404 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrj4m"] Feb 26 12:58:33 crc kubenswrapper[4724]: I0226 12:58:33.500109 4724 generic.go:334] "Generic (PLEG): container finished" podID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerID="039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a" exitCode=0 Feb 26 12:58:33 crc kubenswrapper[4724]: I0226 12:58:33.500404 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerDied","Data":"039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a"} Feb 26 12:58:33 crc kubenswrapper[4724]: I0226 12:58:33.500437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerStarted","Data":"382d6fc20428b79013ecea861e283137891b4adaa44d6569c9725c0560d2f587"} Feb 26 12:58:35 crc kubenswrapper[4724]: I0226 12:58:35.524794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerStarted","Data":"eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7"} Feb 26 12:58:36 crc kubenswrapper[4724]: I0226 12:58:36.538234 4724 generic.go:334] "Generic (PLEG): container finished" podID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerID="eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7" exitCode=0 Feb 26 12:58:36 crc kubenswrapper[4724]: I0226 12:58:36.538538 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerDied","Data":"eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7"} Feb 26 12:58:37 crc kubenswrapper[4724]: I0226 12:58:37.556911 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerStarted","Data":"e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0"} Feb 26 12:58:37 crc kubenswrapper[4724]: I0226 12:58:37.598327 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rrj4m" podStartSLOduration=3.148361676 podStartE2EDuration="6.598292667s" podCreationTimestamp="2026-02-26 12:58:31 +0000 UTC" firstStartedPulling="2026-02-26 12:58:33.502186206 +0000 UTC m=+6780.157925321" lastFinishedPulling="2026-02-26 12:58:36.952117177 +0000 UTC m=+6783.607856312" observedRunningTime="2026-02-26 12:58:37.582748271 +0000 UTC m=+6784.238487406" watchObservedRunningTime="2026-02-26 12:58:37.598292667 +0000 UTC m=+6784.254031782" Feb 26 12:58:41 crc kubenswrapper[4724]: I0226 12:58:41.947416 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:41 crc kubenswrapper[4724]: I0226 12:58:41.948007 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:42 crc kubenswrapper[4724]: I0226 12:58:42.005131 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:42 crc kubenswrapper[4724]: I0226 12:58:42.642092 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:42 crc kubenswrapper[4724]: I0226 12:58:42.703309 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrj4m"] Feb 26 12:58:44 crc kubenswrapper[4724]: I0226 12:58:44.613012 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rrj4m" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="registry-server" containerID="cri-o://e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0" gracePeriod=2 Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.140021 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.263747 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-utilities\") pod \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.263890 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-catalog-content\") pod \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.264579 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-utilities" (OuterVolumeSpecName: "utilities") pod "53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" (UID: "53e8f45e-4f84-4cff-9eef-a87a8a1dee4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.265502 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcfqg\" (UniqueName: \"kubernetes.io/projected/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-kube-api-access-xcfqg\") pod \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\" (UID: \"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b\") " Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.266713 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.271195 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-kube-api-access-xcfqg" (OuterVolumeSpecName: "kube-api-access-xcfqg") pod "53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" (UID: "53e8f45e-4f84-4cff-9eef-a87a8a1dee4b"). InnerVolumeSpecName "kube-api-access-xcfqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.308425 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" (UID: "53e8f45e-4f84-4cff-9eef-a87a8a1dee4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.368742 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.368774 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcfqg\" (UniqueName: \"kubernetes.io/projected/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b-kube-api-access-xcfqg\") on node \"crc\" DevicePath \"\"" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.632984 4724 generic.go:334] "Generic (PLEG): container finished" podID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerID="e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0" exitCode=0 Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.633029 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerDied","Data":"e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0"} Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.633059 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrj4m" event={"ID":"53e8f45e-4f84-4cff-9eef-a87a8a1dee4b","Type":"ContainerDied","Data":"382d6fc20428b79013ecea861e283137891b4adaa44d6569c9725c0560d2f587"} Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.633078 4724 scope.go:117] "RemoveContainer" containerID="e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.633246 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rrj4m" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.669696 4724 scope.go:117] "RemoveContainer" containerID="eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.681230 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrj4m"] Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.696679 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rrj4m"] Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.725388 4724 scope.go:117] "RemoveContainer" containerID="039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.754622 4724 scope.go:117] "RemoveContainer" containerID="e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0" Feb 26 12:58:45 crc kubenswrapper[4724]: E0226 12:58:45.755275 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0\": container with ID starting with e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0 not found: ID does not exist" containerID="e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.755317 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0"} err="failed to get container status \"e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0\": rpc error: code = NotFound desc = could not find container \"e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0\": container with ID starting with e84f9ae8218b88b59b9ca205bf2e05c3d643cf44ba717da831fea277882447b0 not found: ID does not exist" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.755348 4724 scope.go:117] "RemoveContainer" containerID="eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7" Feb 26 12:58:45 crc kubenswrapper[4724]: E0226 12:58:45.755824 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7\": container with ID starting with eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7 not found: ID does not exist" containerID="eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.755849 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7"} err="failed to get container status \"eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7\": rpc error: code = NotFound desc = could not find container \"eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7\": container with ID starting with eecbb28fd989aabf337a617d1a8aa164a7a5d4c58bcaf54e56821679bc7745e7 not found: ID does not exist" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.755867 4724 scope.go:117] "RemoveContainer" containerID="039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a" Feb 26 12:58:45 crc kubenswrapper[4724]: E0226 12:58:45.756154 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a\": container with ID starting with 039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a not found: ID does not exist" containerID="039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.756199 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a"} err="failed to get container status \"039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a\": rpc error: code = NotFound desc = could not find container \"039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a\": container with ID starting with 039e62ed56b24138bb322cf80ac7c2010dfdd8c4a47f49a04fb9ad4c814c0e8a not found: ID does not exist" Feb 26 12:58:45 crc kubenswrapper[4724]: I0226 12:58:45.986350 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" path="/var/lib/kubelet/pods/53e8f45e-4f84-4cff-9eef-a87a8a1dee4b/volumes" Feb 26 12:58:46 crc kubenswrapper[4724]: I0226 12:58:46.906096 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:58:46 crc kubenswrapper[4724]: I0226 12:58:46.906437 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.480357 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2xvqf"] Feb 26 12:58:56 crc kubenswrapper[4724]: E0226 12:58:56.481148 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="extract-content" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.481160 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="extract-content" Feb 26 12:58:56 crc kubenswrapper[4724]: E0226 12:58:56.481208 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="extract-utilities" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.481215 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="extract-utilities" Feb 26 12:58:56 crc kubenswrapper[4724]: E0226 12:58:56.481229 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="registry-server" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.481234 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="registry-server" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.481423 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e8f45e-4f84-4cff-9eef-a87a8a1dee4b" containerName="registry-server" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 
12:58:56.482663 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.484655 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-utilities\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.484782 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-catalog-content\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.484856 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhx9r\" (UniqueName: \"kubernetes.io/projected/9475c04e-194b-4271-b003-d728149c4c40-kube-api-access-vhx9r\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.507643 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2xvqf"] Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.586530 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-catalog-content\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.586658 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhx9r\" (UniqueName: \"kubernetes.io/projected/9475c04e-194b-4271-b003-d728149c4c40-kube-api-access-vhx9r\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.586774 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-utilities\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.586962 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-catalog-content\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.587269 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-utilities\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.609285 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhx9r\" (UniqueName: \"kubernetes.io/projected/9475c04e-194b-4271-b003-d728149c4c40-kube-api-access-vhx9r\") pod \"redhat-operators-2xvqf\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:56 crc kubenswrapper[4724]: I0226 12:58:56.806774 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:58:57 crc kubenswrapper[4724]: I0226 12:58:57.347525 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2xvqf"] Feb 26 12:58:57 crc kubenswrapper[4724]: I0226 12:58:57.735207 4724 generic.go:334] "Generic (PLEG): container finished" podID="9475c04e-194b-4271-b003-d728149c4c40" containerID="e903bb6b8d4e9299f2aa85fa992819a97c4a8ed903f5004d7945b8412079a513" exitCode=0 Feb 26 12:58:57 crc kubenswrapper[4724]: I0226 12:58:57.735709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerDied","Data":"e903bb6b8d4e9299f2aa85fa992819a97c4a8ed903f5004d7945b8412079a513"} Feb 26 12:58:57 crc kubenswrapper[4724]: I0226 12:58:57.735767 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerStarted","Data":"7f5238d706cd50425a054f8736f0ff35d8ef6c86495f5eb67d988372ddda571a"} Feb 26 12:58:59 crc kubenswrapper[4724]: I0226 12:58:59.755003 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerStarted","Data":"07c787aac720c3b7a914271e631538d906e125300b2d6a60d36eb5f05788bee0"} Feb 26 12:59:06 crc kubenswrapper[4724]: I0226 12:59:06.815642 4724 generic.go:334] "Generic (PLEG): container finished" podID="9475c04e-194b-4271-b003-d728149c4c40" containerID="07c787aac720c3b7a914271e631538d906e125300b2d6a60d36eb5f05788bee0" exitCode=0 Feb 26 12:59:06 crc kubenswrapper[4724]: I0226 12:59:06.815663 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerDied","Data":"07c787aac720c3b7a914271e631538d906e125300b2d6a60d36eb5f05788bee0"} Feb 26 12:59:07 crc kubenswrapper[4724]: I0226 12:59:07.825441 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerStarted","Data":"dfed30aa37fea40fedab2bd1d58b2fa08e5255651dec0cbf843a793fd4209d29"} Feb 26 12:59:07 crc kubenswrapper[4724]: I0226 12:59:07.845209 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2xvqf" podStartSLOduration=2.132840781 podStartE2EDuration="11.845172832s" podCreationTimestamp="2026-02-26 12:58:56 +0000 UTC" firstStartedPulling="2026-02-26 12:58:57.737267957 +0000 UTC m=+6804.393007072" lastFinishedPulling="2026-02-26 12:59:07.449600008 +0000 UTC m=+6814.105339123" observedRunningTime="2026-02-26 12:59:07.843302795 +0000 UTC m=+6814.499041910" watchObservedRunningTime="2026-02-26 12:59:07.845172832 +0000 UTC m=+6814.500911967" Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.807747 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.808362 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.906841 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.906916 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.906979 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.908047 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 12:59:16 crc kubenswrapper[4724]: I0226 12:59:16.908136 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" gracePeriod=600 Feb 26 12:59:17 crc kubenswrapper[4724]: E0226 12:59:17.038249 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:59:17 crc kubenswrapper[4724]: I0226 12:59:17.863574 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2xvqf" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" probeResult="failure" output=< Feb 26 12:59:17 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 12:59:17 crc kubenswrapper[4724]: > Feb 26 12:59:17 crc kubenswrapper[4724]: I0226 12:59:17.939934 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" exitCode=0 Feb 26 12:59:17 crc kubenswrapper[4724]: I0226 12:59:17.939979 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"} Feb 26 12:59:17 crc 
Feb 26 12:59:17 crc kubenswrapper[4724]: I0226 12:59:17.940018 4724 scope.go:117] "RemoveContainer" containerID="3e26710043bff6da437222d56ea51a7da623eabb0b8e2e2eb93e241e3e4a190d"
Feb 26 12:59:17 crc kubenswrapper[4724]: I0226 12:59:17.940955 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"
Feb 26 12:59:17 crc kubenswrapper[4724]: E0226 12:59:17.941295 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:59:27 crc kubenswrapper[4724]: I0226 12:59:27.857568 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2xvqf" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:59:27 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:59:27 crc kubenswrapper[4724]: >
Feb 26 12:59:29 crc kubenswrapper[4724]: I0226 12:59:29.976729 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"
Feb 26 12:59:29 crc kubenswrapper[4724]: E0226 12:59:29.976989 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:59:37 crc kubenswrapper[4724]: I0226 12:59:37.861572 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2xvqf" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:59:37 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:59:37 crc kubenswrapper[4724]: >
Feb 26 12:59:44 crc kubenswrapper[4724]: I0226 12:59:44.975432 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"
Feb 26 12:59:44 crc kubenswrapper[4724]: E0226 12:59:44.976238 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 12:59:47 crc kubenswrapper[4724]: I0226 12:59:47.864422 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2xvqf" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" probeResult="failure" output=<
Feb 26 12:59:47 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 12:59:47 crc kubenswrapper[4724]: >
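The repeated startup-probe output ("failed to connect service \":50051\" within 1s") is a gRPC-style port check with a 1-second timeout; the registry-server is still loading its catalog, so the port is not yet accepting connections. A socket-level stand-in for that check (the real probe speaks the gRPC health protocol on top of this):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Approximates the 1s connect check seen in the probe output above.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("127.0.0.1", 50051))  # False until registry-server finishes loading
```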
status="started" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:59:56 crc kubenswrapper[4724]: I0226 12:59:56.959879 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:59:57 crc kubenswrapper[4724]: I0226 12:59:57.735782 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2xvqf"] Feb 26 12:59:58 crc kubenswrapper[4724]: I0226 12:59:58.391557 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2xvqf" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" containerID="cri-o://dfed30aa37fea40fedab2bd1d58b2fa08e5255651dec0cbf843a793fd4209d29" gracePeriod=2 Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.264073 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 12:59:59 crc kubenswrapper[4724]: E0226 12:59:59.264835 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.402381 4724 generic.go:334] "Generic (PLEG): container finished" podID="9475c04e-194b-4271-b003-d728149c4c40" containerID="dfed30aa37fea40fedab2bd1d58b2fa08e5255651dec0cbf843a793fd4209d29" exitCode=0 Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.402435 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerDied","Data":"dfed30aa37fea40fedab2bd1d58b2fa08e5255651dec0cbf843a793fd4209d29"} Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.531431 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.696653 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-utilities\") pod \"9475c04e-194b-4271-b003-d728149c4c40\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.697160 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhx9r\" (UniqueName: \"kubernetes.io/projected/9475c04e-194b-4271-b003-d728149c4c40-kube-api-access-vhx9r\") pod \"9475c04e-194b-4271-b003-d728149c4c40\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.697382 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-catalog-content\") pod \"9475c04e-194b-4271-b003-d728149c4c40\" (UID: \"9475c04e-194b-4271-b003-d728149c4c40\") " Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.697380 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-utilities" (OuterVolumeSpecName: "utilities") pod "9475c04e-194b-4271-b003-d728149c4c40" (UID: "9475c04e-194b-4271-b003-d728149c4c40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.698491 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.719894 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9475c04e-194b-4271-b003-d728149c4c40-kube-api-access-vhx9r" (OuterVolumeSpecName: "kube-api-access-vhx9r") pod "9475c04e-194b-4271-b003-d728149c4c40" (UID: "9475c04e-194b-4271-b003-d728149c4c40"). InnerVolumeSpecName "kube-api-access-vhx9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.808068 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhx9r\" (UniqueName: \"kubernetes.io/projected/9475c04e-194b-4271-b003-d728149c4c40-kube-api-access-vhx9r\") on node \"crc\" DevicePath \"\"" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.833419 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9475c04e-194b-4271-b003-d728149c4c40" (UID: "9475c04e-194b-4271-b003-d728149c4c40"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 12:59:59 crc kubenswrapper[4724]: I0226 12:59:59.910276 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9475c04e-194b-4271-b003-d728149c4c40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.188006 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535180-q4552"] Feb 26 13:00:00 crc kubenswrapper[4724]: E0226 13:00:00.188445 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.188464 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" Feb 26 13:00:00 crc kubenswrapper[4724]: E0226 13:00:00.188474 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="extract-content" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.188480 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="extract-content" Feb 26 13:00:00 crc kubenswrapper[4724]: E0226 13:00:00.188500 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="extract-utilities" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.188508 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="extract-utilities" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.188726 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9475c04e-194b-4271-b003-d728149c4c40" containerName="registry-server" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.196498 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535180-q4552" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.214382 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.214727 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.235269 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.236348 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535180-q4552"] Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.305057 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"] Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.307035 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.320733 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.325624 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.328658 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9vg\" (UniqueName: \"kubernetes.io/projected/a4a13762-53eb-4d56-b416-33fc8ebc2592-kube-api-access-wf9vg\") pod \"auto-csr-approver-29535180-q4552\" (UID: \"a4a13762-53eb-4d56-b416-33fc8ebc2592\") " pod="openshift-infra/auto-csr-approver-29535180-q4552" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.383374 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"] Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.432392 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-secret-volume\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.432480 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-config-volume\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.432508 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9vg\" (UniqueName: \"kubernetes.io/projected/a4a13762-53eb-4d56-b416-33fc8ebc2592-kube-api-access-wf9vg\") pod \"auto-csr-approver-29535180-q4552\" (UID: \"a4a13762-53eb-4d56-b416-33fc8ebc2592\") " pod="openshift-infra/auto-csr-approver-29535180-q4552" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.432569 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgd4\" (UniqueName: \"kubernetes.io/projected/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-kube-api-access-hwgd4\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.435075 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2xvqf" event={"ID":"9475c04e-194b-4271-b003-d728149c4c40","Type":"ContainerDied","Data":"7f5238d706cd50425a054f8736f0ff35d8ef6c86495f5eb67d988372ddda571a"} Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.438487 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2xvqf" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.438734 4724 scope.go:117] "RemoveContainer" containerID="dfed30aa37fea40fedab2bd1d58b2fa08e5255651dec0cbf843a793fd4209d29" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.504348 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9vg\" (UniqueName: \"kubernetes.io/projected/a4a13762-53eb-4d56-b416-33fc8ebc2592-kube-api-access-wf9vg\") pod \"auto-csr-approver-29535180-q4552\" (UID: \"a4a13762-53eb-4d56-b416-33fc8ebc2592\") " pod="openshift-infra/auto-csr-approver-29535180-q4552" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.510387 4724 scope.go:117] "RemoveContainer" containerID="07c787aac720c3b7a914271e631538d906e125300b2d6a60d36eb5f05788bee0" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.521717 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535180-q4552" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.531332 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2xvqf"] Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.536409 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-secret-volume\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.536494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-config-volume\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.536563 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwgd4\" (UniqueName: \"kubernetes.io/projected/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-kube-api-access-hwgd4\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.541072 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-config-volume\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.548194 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2xvqf"] Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.555944 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-secret-volume\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.585609 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwgd4\" (UniqueName: \"kubernetes.io/projected/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-kube-api-access-hwgd4\") pod \"collect-profiles-29535180-4dkb4\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.605123 4724 scope.go:117] "RemoveContainer" containerID="e903bb6b8d4e9299f2aa85fa992819a97c4a8ed903f5004d7945b8412079a513" Feb 26 13:00:00 crc kubenswrapper[4724]: I0226 13:00:00.688684 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" Feb 26 13:00:01 crc kubenswrapper[4724]: I0226 13:00:01.463550 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535180-q4552"] Feb 26 13:00:01 crc kubenswrapper[4724]: I0226 13:00:01.551724 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"] Feb 26 13:00:01 crc kubenswrapper[4724]: W0226 13:00:01.567330 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6aac2e_5c33_4b6c_88ed_92c0426aae93.slice/crio-6fb745388396821bf4eedc2b40e25578a9a212634b09ecdc5c1aca5e8d0b1eb8 WatchSource:0}: Error finding container 6fb745388396821bf4eedc2b40e25578a9a212634b09ecdc5c1aca5e8d0b1eb8: Status 404 returned error can't find the container with id 6fb745388396821bf4eedc2b40e25578a9a212634b09ecdc5c1aca5e8d0b1eb8 Feb 26 13:00:01 crc kubenswrapper[4724]: I0226 13:00:01.991469 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9475c04e-194b-4271-b003-d728149c4c40" path="/var/lib/kubelet/pods/9475c04e-194b-4271-b003-d728149c4c40/volumes" Feb 26 13:00:02 crc kubenswrapper[4724]: I0226 13:00:02.463729 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" event={"ID":"9c6aac2e-5c33-4b6c-88ed-92c0426aae93","Type":"ContainerStarted","Data":"db7310188a0904fc6778787df6a59a48c57de8b1692194496c00108913d10db6"} Feb 26 13:00:02 crc kubenswrapper[4724]: I0226 13:00:02.464068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" event={"ID":"9c6aac2e-5c33-4b6c-88ed-92c0426aae93","Type":"ContainerStarted","Data":"6fb745388396821bf4eedc2b40e25578a9a212634b09ecdc5c1aca5e8d0b1eb8"} Feb 26 13:00:02 crc kubenswrapper[4724]: I0226 13:00:02.468031 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535180-q4552" event={"ID":"a4a13762-53eb-4d56-b416-33fc8ebc2592","Type":"ContainerStarted","Data":"f22fa401de9da2ff7fe93172656febdcc9a997392b71bd7a8776ecb9c21e8d40"} Feb 26 13:00:02 crc kubenswrapper[4724]: I0226 13:00:02.499340 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" podStartSLOduration=2.499250539 podStartE2EDuration="2.499250539s" podCreationTimestamp="2026-02-26 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:00:02.495480923 +0000 UTC m=+6869.151220048" watchObservedRunningTime="2026-02-26 13:00:02.499250539 +0000 UTC m=+6869.154989674" Feb 26 13:00:03 crc 
Feb 26 13:00:03 crc kubenswrapper[4724]: I0226 13:00:03.497790 4724 generic.go:334] "Generic (PLEG): container finished" podID="9c6aac2e-5c33-4b6c-88ed-92c0426aae93" containerID="db7310188a0904fc6778787df6a59a48c57de8b1692194496c00108913d10db6" exitCode=0
Feb 26 13:00:03 crc kubenswrapper[4724]: I0226 13:00:03.497849 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" event={"ID":"9c6aac2e-5c33-4b6c-88ed-92c0426aae93","Type":"ContainerDied","Data":"db7310188a0904fc6778787df6a59a48c57de8b1692194496c00108913d10db6"}
Feb 26 13:00:04 crc kubenswrapper[4724]: I0226 13:00:04.936898 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.042929 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-secret-volume\") pod \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") "
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.043082 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwgd4\" (UniqueName: \"kubernetes.io/projected/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-kube-api-access-hwgd4\") pod \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") "
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.043259 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-config-volume\") pod \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\" (UID: \"9c6aac2e-5c33-4b6c-88ed-92c0426aae93\") "
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.045220 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c6aac2e-5c33-4b6c-88ed-92c0426aae93" (UID: "9c6aac2e-5c33-4b6c-88ed-92c0426aae93"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.049237 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-kube-api-access-hwgd4" (OuterVolumeSpecName: "kube-api-access-hwgd4") pod "9c6aac2e-5c33-4b6c-88ed-92c0426aae93" (UID: "9c6aac2e-5c33-4b6c-88ed-92c0426aae93"). InnerVolumeSpecName "kube-api-access-hwgd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.052165 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c6aac2e-5c33-4b6c-88ed-92c0426aae93" (UID: "9c6aac2e-5c33-4b6c-88ed-92c0426aae93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.151501 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwgd4\" (UniqueName: \"kubernetes.io/projected/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-kube-api-access-hwgd4\") on node \"crc\" DevicePath \"\""
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.151546 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.151557 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6aac2e-5c33-4b6c-88ed-92c0426aae93-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.516819 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4" event={"ID":"9c6aac2e-5c33-4b6c-88ed-92c0426aae93","Type":"ContainerDied","Data":"6fb745388396821bf4eedc2b40e25578a9a212634b09ecdc5c1aca5e8d0b1eb8"}
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.516867 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb745388396821bf4eedc2b40e25578a9a212634b09ecdc5c1aca5e8d0b1eb8"
Feb 26 13:00:05 crc kubenswrapper[4724]: I0226 13:00:05.516935 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"
Feb 26 13:00:06 crc kubenswrapper[4724]: I0226 13:00:06.087694 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"]
Feb 26 13:00:06 crc kubenswrapper[4724]: I0226 13:00:06.098019 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535135-gs2kh"]
Feb 26 13:00:07 crc kubenswrapper[4724]: I0226 13:00:07.987376 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7019362b-8ced-4f02-9bcc-c92fcc157acd" path="/var/lib/kubelet/pods/7019362b-8ced-4f02-9bcc-c92fcc157acd/volumes"
Feb 26 13:00:12 crc kubenswrapper[4724]: I0226 13:00:12.975484 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54"
Feb 26 13:00:12 crc kubenswrapper[4724]: E0226 13:00:12.977260 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:00:22 crc kubenswrapper[4724]: I0226 13:00:22.674671 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535180-q4552" event={"ID":"a4a13762-53eb-4d56-b416-33fc8ebc2592","Type":"ContainerStarted","Data":"f585ba93f838cb069a670e46e5d3976890253cd116595bf15ed3ce3a921a898d"}
Feb 26 13:00:22 crc kubenswrapper[4724]: I0226 13:00:22.689034 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535180-q4552" podStartSLOduration=2.142749262 podStartE2EDuration="22.689017035s" podCreationTimestamp="2026-02-26 13:00:00 +0000 UTC" firstStartedPulling="2026-02-26 13:00:01.479143437 +0000 UTC m=+6868.134882552" lastFinishedPulling="2026-02-26 13:00:22.02541121 +0000 UTC m=+6888.681150325" observedRunningTime="2026-02-26 13:00:22.687545278 +0000 UTC m=+6889.343284413" watchObservedRunningTime="2026-02-26 13:00:22.689017035 +0000 UTC m=+6889.344756150"
Feb 26 13:00:24 crc kubenswrapper[4724]: I0226 13:00:24.547344 4724 scope.go:117] "RemoveContainer" containerID="17ce60416393169d241b2cd08f67771a0ef14f3697375a2cbdc8e422e4d28deb"
Feb 26 13:00:24 crc kubenswrapper[4724]: I0226 13:00:24.694921 4724 generic.go:334] "Generic (PLEG): container finished" podID="a4a13762-53eb-4d56-b416-33fc8ebc2592" containerID="f585ba93f838cb069a670e46e5d3976890253cd116595bf15ed3ce3a921a898d" exitCode=0
Feb 26 13:00:24 crc kubenswrapper[4724]: I0226 13:00:24.694964 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535180-q4552" event={"ID":"a4a13762-53eb-4d56-b416-33fc8ebc2592","Type":"ContainerDied","Data":"f585ba93f838cb069a670e46e5d3976890253cd116595bf15ed3ce3a921a898d"}
Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.197359 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535180-q4552"
Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.337728 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf9vg\" (UniqueName: \"kubernetes.io/projected/a4a13762-53eb-4d56-b416-33fc8ebc2592-kube-api-access-wf9vg\") pod \"a4a13762-53eb-4d56-b416-33fc8ebc2592\" (UID: \"a4a13762-53eb-4d56-b416-33fc8ebc2592\") "
Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.346487 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a13762-53eb-4d56-b416-33fc8ebc2592-kube-api-access-wf9vg" (OuterVolumeSpecName: "kube-api-access-wf9vg") pod "a4a13762-53eb-4d56-b416-33fc8ebc2592" (UID: "a4a13762-53eb-4d56-b416-33fc8ebc2592"). InnerVolumeSpecName "kube-api-access-wf9vg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.439932 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf9vg\" (UniqueName: \"kubernetes.io/projected/a4a13762-53eb-4d56-b416-33fc8ebc2592-kube-api-access-wf9vg\") on node \"crc\" DevicePath \"\""
Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.713729 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535180-q4552" event={"ID":"a4a13762-53eb-4d56-b416-33fc8ebc2592","Type":"ContainerDied","Data":"f22fa401de9da2ff7fe93172656febdcc9a997392b71bd7a8776ecb9c21e8d40"}
Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.713793 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22fa401de9da2ff7fe93172656febdcc9a997392b71bd7a8776ecb9c21e8d40"
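For auto-csr-approver-29535180, by contrast, a real image pull happened, and the ~20.5s pull window accounts exactly for the gap between the 22.69s E2E and 2.14s SLO figures (values truncated to microseconds from the log):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
pull = (datetime.strptime("2026-02-26 13:00:22.025411", fmt)
        - datetime.strptime("2026-02-26 13:00:01.479143", fmt)).total_seconds()
print(round(22.689017 - pull, 6))  # ~2.142749, the logged podStartSLOduration
```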
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535180-q4552" Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.782752 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535174-5p96g"] Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.792724 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535174-5p96g"] Feb 26 13:00:26 crc kubenswrapper[4724]: I0226 13:00:26.976007 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:00:26 crc kubenswrapper[4724]: E0226 13:00:26.976289 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:00:27 crc kubenswrapper[4724]: I0226 13:00:27.986484 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b998271d-dfc5-4a88-847b-a3223fa163e4" path="/var/lib/kubelet/pods/b998271d-dfc5-4a88-847b-a3223fa163e4/volumes" Feb 26 13:00:40 crc kubenswrapper[4724]: I0226 13:00:40.975822 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:00:40 crc kubenswrapper[4724]: E0226 13:00:40.976664 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:00:53 crc kubenswrapper[4724]: I0226 13:00:53.992565 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:00:53 crc kubenswrapper[4724]: E0226 13:00:53.993689 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.184592 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535181-fgzvv"] Feb 26 13:01:00 crc kubenswrapper[4724]: E0226 13:01:00.186827 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a13762-53eb-4d56-b416-33fc8ebc2592" containerName="oc" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.187120 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a13762-53eb-4d56-b416-33fc8ebc2592" containerName="oc" Feb 26 13:01:00 crc kubenswrapper[4724]: E0226 13:01:00.187294 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6aac2e-5c33-4b6c-88ed-92c0426aae93" containerName="collect-profiles" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.187439 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9c6aac2e-5c33-4b6c-88ed-92c0426aae93" containerName="collect-profiles" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.189586 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6aac2e-5c33-4b6c-88ed-92c0426aae93" containerName="collect-profiles" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.189622 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a13762-53eb-4d56-b416-33fc8ebc2592" containerName="oc" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.190448 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.195750 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535181-fgzvv"] Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.384380 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6jz\" (UniqueName: \"kubernetes.io/projected/b8280e7e-39bf-4ace-b878-cc9148026c74-kube-api-access-2j6jz\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.384470 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-fernet-keys\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.384555 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-config-data\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.384621 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-combined-ca-bundle\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.486409 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-config-data\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.486524 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-combined-ca-bundle\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.486618 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j6jz\" (UniqueName: \"kubernetes.io/projected/b8280e7e-39bf-4ace-b878-cc9148026c74-kube-api-access-2j6jz\") pod \"keystone-cron-29535181-fgzvv\" (UID: 
\"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.487029 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-fernet-keys\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.493990 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-combined-ca-bundle\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.494768 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-config-data\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.496010 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-fernet-keys\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.509777 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j6jz\" (UniqueName: \"kubernetes.io/projected/b8280e7e-39bf-4ace-b878-cc9148026c74-kube-api-access-2j6jz\") pod \"keystone-cron-29535181-fgzvv\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:00 crc kubenswrapper[4724]: I0226 13:01:00.532745 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:01 crc kubenswrapper[4724]: I0226 13:01:01.043767 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535181-fgzvv"] Feb 26 13:01:02 crc kubenswrapper[4724]: I0226 13:01:02.073929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535181-fgzvv" event={"ID":"b8280e7e-39bf-4ace-b878-cc9148026c74","Type":"ContainerStarted","Data":"e2d579ead547de5e16199e4dda6515fe7548f1f97cbd860e35bb59d17fbf6df0"} Feb 26 13:01:02 crc kubenswrapper[4724]: I0226 13:01:02.074634 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535181-fgzvv" event={"ID":"b8280e7e-39bf-4ace-b878-cc9148026c74","Type":"ContainerStarted","Data":"4d0c4f9bcac8fbedc5b4ea0d41b7e94eb14b3f0c17573a20ab915ed2d7ef13f3"} Feb 26 13:01:02 crc kubenswrapper[4724]: I0226 13:01:02.094456 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535181-fgzvv" podStartSLOduration=2.09442805 podStartE2EDuration="2.09442805s" podCreationTimestamp="2026-02-26 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:01:02.089575547 +0000 UTC m=+6928.745314692" watchObservedRunningTime="2026-02-26 13:01:02.09442805 +0000 UTC m=+6928.750167205" Feb 26 13:01:05 crc kubenswrapper[4724]: I0226 13:01:05.105857 4724 generic.go:334] "Generic (PLEG): container finished" podID="b8280e7e-39bf-4ace-b878-cc9148026c74" containerID="e2d579ead547de5e16199e4dda6515fe7548f1f97cbd860e35bb59d17fbf6df0" exitCode=0 Feb 26 13:01:05 crc kubenswrapper[4724]: I0226 13:01:05.105925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535181-fgzvv" event={"ID":"b8280e7e-39bf-4ace-b878-cc9148026c74","Type":"ContainerDied","Data":"e2d579ead547de5e16199e4dda6515fe7548f1f97cbd860e35bb59d17fbf6df0"} Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.587571 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.724078 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j6jz\" (UniqueName: \"kubernetes.io/projected/b8280e7e-39bf-4ace-b878-cc9148026c74-kube-api-access-2j6jz\") pod \"b8280e7e-39bf-4ace-b878-cc9148026c74\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.724217 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-fernet-keys\") pod \"b8280e7e-39bf-4ace-b878-cc9148026c74\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.724241 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-config-data\") pod \"b8280e7e-39bf-4ace-b878-cc9148026c74\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.724472 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-combined-ca-bundle\") pod \"b8280e7e-39bf-4ace-b878-cc9148026c74\" (UID: \"b8280e7e-39bf-4ace-b878-cc9148026c74\") " Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.732558 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8280e7e-39bf-4ace-b878-cc9148026c74-kube-api-access-2j6jz" (OuterVolumeSpecName: "kube-api-access-2j6jz") pod "b8280e7e-39bf-4ace-b878-cc9148026c74" (UID: "b8280e7e-39bf-4ace-b878-cc9148026c74"). InnerVolumeSpecName "kube-api-access-2j6jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.733515 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b8280e7e-39bf-4ace-b878-cc9148026c74" (UID: "b8280e7e-39bf-4ace-b878-cc9148026c74"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.754126 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8280e7e-39bf-4ace-b878-cc9148026c74" (UID: "b8280e7e-39bf-4ace-b878-cc9148026c74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.802757 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-config-data" (OuterVolumeSpecName: "config-data") pod "b8280e7e-39bf-4ace-b878-cc9148026c74" (UID: "b8280e7e-39bf-4ace-b878-cc9148026c74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.828615 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j6jz\" (UniqueName: \"kubernetes.io/projected/b8280e7e-39bf-4ace-b878-cc9148026c74-kube-api-access-2j6jz\") on node \"crc\" DevicePath \"\"" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.828790 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.828824 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.828834 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8280e7e-39bf-4ace-b878-cc9148026c74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 13:01:06 crc kubenswrapper[4724]: I0226 13:01:06.988689 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:01:06 crc kubenswrapper[4724]: E0226 13:01:06.989308 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:01:07 crc kubenswrapper[4724]: I0226 13:01:07.128794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535181-fgzvv" event={"ID":"b8280e7e-39bf-4ace-b878-cc9148026c74","Type":"ContainerDied","Data":"4d0c4f9bcac8fbedc5b4ea0d41b7e94eb14b3f0c17573a20ab915ed2d7ef13f3"} Feb 26 13:01:07 crc kubenswrapper[4724]: I0226 13:01:07.128848 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0c4f9bcac8fbedc5b4ea0d41b7e94eb14b3f0c17573a20ab915ed2d7ef13f3" Feb 26 13:01:07 crc kubenswrapper[4724]: I0226 13:01:07.128960 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535181-fgzvv" Feb 26 13:01:18 crc kubenswrapper[4724]: I0226 13:01:18.975283 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:01:18 crc kubenswrapper[4724]: E0226 13:01:18.976819 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:01:24 crc kubenswrapper[4724]: I0226 13:01:24.645282 4724 scope.go:117] "RemoveContainer" containerID="7db680ae75e13faf0d0e01f9963436847eba6b7f701a870d9b2c7cda7b6a579a" Feb 26 13:01:32 crc kubenswrapper[4724]: I0226 13:01:32.976282 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:01:32 crc kubenswrapper[4724]: E0226 13:01:32.977217 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:01:39 crc kubenswrapper[4724]: I0226 13:01:39.944227 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sj49g"] Feb 26 13:01:39 crc kubenswrapper[4724]: E0226 13:01:39.947423 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8280e7e-39bf-4ace-b878-cc9148026c74" containerName="keystone-cron" Feb 26 13:01:39 crc kubenswrapper[4724]: I0226 13:01:39.947455 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8280e7e-39bf-4ace-b878-cc9148026c74" containerName="keystone-cron" Feb 26 13:01:39 crc kubenswrapper[4724]: I0226 13:01:39.947772 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8280e7e-39bf-4ace-b878-cc9148026c74" containerName="keystone-cron" Feb 26 13:01:39 crc kubenswrapper[4724]: I0226 13:01:39.949367 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:39 crc kubenswrapper[4724]: I0226 13:01:39.991407 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sj49g"] Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.046613 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-utilities\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.046677 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8nb4\" (UniqueName: \"kubernetes.io/projected/62e8c28c-5893-4543-bda4-20cf2d1866ae-kube-api-access-x8nb4\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.046766 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-catalog-content\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.148374 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-catalog-content\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.148867 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-utilities\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.148892 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8nb4\" (UniqueName: \"kubernetes.io/projected/62e8c28c-5893-4543-bda4-20cf2d1866ae-kube-api-access-x8nb4\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.148974 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-catalog-content\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.150075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-utilities\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.170601 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x8nb4\" (UniqueName: \"kubernetes.io/projected/62e8c28c-5893-4543-bda4-20cf2d1866ae-kube-api-access-x8nb4\") pod \"redhat-marketplace-sj49g\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.283638 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:40 crc kubenswrapper[4724]: I0226 13:01:40.969194 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sj49g"] Feb 26 13:01:41 crc kubenswrapper[4724]: I0226 13:01:41.438461 4724 generic.go:334] "Generic (PLEG): container finished" podID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerID="aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71" exitCode=0 Feb 26 13:01:41 crc kubenswrapper[4724]: I0226 13:01:41.438509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerDied","Data":"aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71"} Feb 26 13:01:41 crc kubenswrapper[4724]: I0226 13:01:41.438545 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerStarted","Data":"5e4126f59c5a1045a3c85e864f163470815856745688ef80fb6466340e77d183"} Feb 26 13:01:42 crc kubenswrapper[4724]: I0226 13:01:42.449312 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerStarted","Data":"1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9"} Feb 26 13:01:44 crc kubenswrapper[4724]: I0226 13:01:44.466479 4724 generic.go:334] "Generic (PLEG): container finished" podID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerID="1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9" exitCode=0 Feb 26 13:01:44 crc kubenswrapper[4724]: I0226 13:01:44.466811 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerDied","Data":"1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9"} Feb 26 13:01:45 crc kubenswrapper[4724]: I0226 13:01:45.477787 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerStarted","Data":"f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966"} Feb 26 13:01:45 crc kubenswrapper[4724]: I0226 13:01:45.504785 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sj49g" podStartSLOduration=3.088473757 podStartE2EDuration="6.504762368s" podCreationTimestamp="2026-02-26 13:01:39 +0000 UTC" firstStartedPulling="2026-02-26 13:01:41.440407729 +0000 UTC m=+6968.096146844" lastFinishedPulling="2026-02-26 13:01:44.85669634 +0000 UTC m=+6971.512435455" observedRunningTime="2026-02-26 13:01:45.496716653 +0000 UTC m=+6972.152455788" watchObservedRunningTime="2026-02-26 13:01:45.504762368 +0000 UTC m=+6972.160501483" Feb 26 13:01:47 crc kubenswrapper[4724]: I0226 13:01:47.975830 4724 scope.go:117] "RemoveContainer" 
containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:01:47 crc kubenswrapper[4724]: E0226 13:01:47.976470 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:01:50 crc kubenswrapper[4724]: I0226 13:01:50.284070 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:50 crc kubenswrapper[4724]: I0226 13:01:50.285375 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:01:51 crc kubenswrapper[4724]: I0226 13:01:51.339807 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sj49g" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="registry-server" probeResult="failure" output=< Feb 26 13:01:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:01:51 crc kubenswrapper[4724]: > Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.153095 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535182-wxmn4"] Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.158434 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.163554 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.163929 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.164115 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.167347 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535182-wxmn4"] Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.257013 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k4l2\" (UniqueName: \"kubernetes.io/projected/672e898e-cc7d-4920-a471-e25c47cbd89d-kube-api-access-9k4l2\") pod \"auto-csr-approver-29535182-wxmn4\" (UID: \"672e898e-cc7d-4920-a471-e25c47cbd89d\") " pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.339703 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.358904 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k4l2\" (UniqueName: \"kubernetes.io/projected/672e898e-cc7d-4920-a471-e25c47cbd89d-kube-api-access-9k4l2\") pod \"auto-csr-approver-29535182-wxmn4\" (UID: \"672e898e-cc7d-4920-a471-e25c47cbd89d\") " pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.386835 4724 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k4l2\" (UniqueName: \"kubernetes.io/projected/672e898e-cc7d-4920-a471-e25c47cbd89d-kube-api-access-9k4l2\") pod \"auto-csr-approver-29535182-wxmn4\" (UID: \"672e898e-cc7d-4920-a471-e25c47cbd89d\") " pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.400631 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:02:00 crc kubenswrapper[4724]: I0226 13:02:00.489847 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:01 crc kubenswrapper[4724]: I0226 13:02:01.198386 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535182-wxmn4"] Feb 26 13:02:01 crc kubenswrapper[4724]: I0226 13:02:01.650590 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" event={"ID":"672e898e-cc7d-4920-a471-e25c47cbd89d","Type":"ContainerStarted","Data":"fe405532b33f20d9ebbc3744e3334ab54d51380a2e84a97e88313f3fd170ad49"} Feb 26 13:02:01 crc kubenswrapper[4724]: I0226 13:02:01.993489 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:02:01 crc kubenswrapper[4724]: E0226 13:02:01.994259 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:02:02 crc kubenswrapper[4724]: I0226 13:02:02.667713 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" event={"ID":"672e898e-cc7d-4920-a471-e25c47cbd89d","Type":"ContainerStarted","Data":"3ae345e5dbbed60285eb9316a677d67db92eb919265d9f75cc4a67ae3331ba59"} Feb 26 13:02:02 crc kubenswrapper[4724]: I0226 13:02:02.686898 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" podStartSLOduration=1.655216915 podStartE2EDuration="2.686877482s" podCreationTimestamp="2026-02-26 13:02:00 +0000 UTC" firstStartedPulling="2026-02-26 13:02:01.239075496 +0000 UTC m=+6987.894814621" lastFinishedPulling="2026-02-26 13:02:02.270736073 +0000 UTC m=+6988.926475188" observedRunningTime="2026-02-26 13:02:02.682750727 +0000 UTC m=+6989.338489862" watchObservedRunningTime="2026-02-26 13:02:02.686877482 +0000 UTC m=+6989.342616597" Feb 26 13:02:02 crc kubenswrapper[4724]: I0226 13:02:02.709674 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sj49g"] Feb 26 13:02:02 crc kubenswrapper[4724]: I0226 13:02:02.709925 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sj49g" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="registry-server" containerID="cri-o://f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966" gracePeriod=2 Feb 26 13:02:02 crc kubenswrapper[4724]: E0226 13:02:02.853212 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62e8c28c_5893_4543_bda4_20cf2d1866ae.slice/crio-f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62e8c28c_5893_4543_bda4_20cf2d1866ae.slice/crio-conmon-f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966.scope\": RecentStats: unable to find data in memory cache]" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.284715 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.428385 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-catalog-content\") pod \"62e8c28c-5893-4543-bda4-20cf2d1866ae\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.428585 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8nb4\" (UniqueName: \"kubernetes.io/projected/62e8c28c-5893-4543-bda4-20cf2d1866ae-kube-api-access-x8nb4\") pod \"62e8c28c-5893-4543-bda4-20cf2d1866ae\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.428695 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-utilities\") pod \"62e8c28c-5893-4543-bda4-20cf2d1866ae\" (UID: \"62e8c28c-5893-4543-bda4-20cf2d1866ae\") " Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.430782 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-utilities" (OuterVolumeSpecName: "utilities") pod "62e8c28c-5893-4543-bda4-20cf2d1866ae" (UID: "62e8c28c-5893-4543-bda4-20cf2d1866ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.443536 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e8c28c-5893-4543-bda4-20cf2d1866ae-kube-api-access-x8nb4" (OuterVolumeSpecName: "kube-api-access-x8nb4") pod "62e8c28c-5893-4543-bda4-20cf2d1866ae" (UID: "62e8c28c-5893-4543-bda4-20cf2d1866ae"). InnerVolumeSpecName "kube-api-access-x8nb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.458862 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62e8c28c-5893-4543-bda4-20cf2d1866ae" (UID: "62e8c28c-5893-4543-bda4-20cf2d1866ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.530937 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.530975 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e8c28c-5893-4543-bda4-20cf2d1866ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.530986 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8nb4\" (UniqueName: \"kubernetes.io/projected/62e8c28c-5893-4543-bda4-20cf2d1866ae-kube-api-access-x8nb4\") on node \"crc\" DevicePath \"\"" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.679860 4724 generic.go:334] "Generic (PLEG): container finished" podID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerID="f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966" exitCode=0 Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.679911 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerDied","Data":"f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966"} Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.679943 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sj49g" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.679963 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sj49g" event={"ID":"62e8c28c-5893-4543-bda4-20cf2d1866ae","Type":"ContainerDied","Data":"5e4126f59c5a1045a3c85e864f163470815856745688ef80fb6466340e77d183"} Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.679999 4724 scope.go:117] "RemoveContainer" containerID="f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.709018 4724 scope.go:117] "RemoveContainer" containerID="1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.724952 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sj49g"] Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.734103 4724 scope.go:117] "RemoveContainer" containerID="aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.750675 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sj49g"] Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.798392 4724 scope.go:117] "RemoveContainer" containerID="f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966" Feb 26 13:02:03 crc kubenswrapper[4724]: E0226 13:02:03.798926 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966\": container with ID starting with f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966 not found: ID does not exist" containerID="f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.798968 4724 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966"} err="failed to get container status \"f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966\": rpc error: code = NotFound desc = could not find container \"f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966\": container with ID starting with f652b5336b51c9d1768711f9185aaf0adadb455e3dd19b40f1d1400b50160966 not found: ID does not exist" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.798999 4724 scope.go:117] "RemoveContainer" containerID="1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9" Feb 26 13:02:03 crc kubenswrapper[4724]: E0226 13:02:03.799609 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9\": container with ID starting with 1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9 not found: ID does not exist" containerID="1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.799741 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9"} err="failed to get container status \"1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9\": rpc error: code = NotFound desc = could not find container \"1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9\": container with ID starting with 1e36bc4536ce53782063a65960f1536bd3cf679754c1f3ce96d2d6f144bf5db9 not found: ID does not exist" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.799827 4724 scope.go:117] "RemoveContainer" containerID="aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71" Feb 26 13:02:03 crc kubenswrapper[4724]: E0226 13:02:03.800164 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71\": container with ID starting with aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71 not found: ID does not exist" containerID="aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71" Feb 26 13:02:03 crc kubenswrapper[4724]: I0226 13:02:03.800219 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71"} err="failed to get container status \"aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71\": rpc error: code = NotFound desc = could not find container \"aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71\": container with ID starting with aa878e0e164957b6b6585a00ab5925ef3eefcf8d9c4a4281239b98adfea87a71 not found: ID does not exist" Feb 26 13:02:04 crc kubenswrapper[4724]: I0226 13:02:04.002264 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" path="/var/lib/kubelet/pods/62e8c28c-5893-4543-bda4-20cf2d1866ae/volumes" Feb 26 13:02:04 crc kubenswrapper[4724]: I0226 13:02:04.689811 4724 generic.go:334] "Generic (PLEG): container finished" podID="672e898e-cc7d-4920-a471-e25c47cbd89d" containerID="3ae345e5dbbed60285eb9316a677d67db92eb919265d9f75cc4a67ae3331ba59" exitCode=0 Feb 26 13:02:04 crc kubenswrapper[4724]: I0226 
13:02:04.689885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" event={"ID":"672e898e-cc7d-4920-a471-e25c47cbd89d","Type":"ContainerDied","Data":"3ae345e5dbbed60285eb9316a677d67db92eb919265d9f75cc4a67ae3331ba59"} Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.065401 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.182869 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k4l2\" (UniqueName: \"kubernetes.io/projected/672e898e-cc7d-4920-a471-e25c47cbd89d-kube-api-access-9k4l2\") pod \"672e898e-cc7d-4920-a471-e25c47cbd89d\" (UID: \"672e898e-cc7d-4920-a471-e25c47cbd89d\") " Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.189012 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672e898e-cc7d-4920-a471-e25c47cbd89d-kube-api-access-9k4l2" (OuterVolumeSpecName: "kube-api-access-9k4l2") pod "672e898e-cc7d-4920-a471-e25c47cbd89d" (UID: "672e898e-cc7d-4920-a471-e25c47cbd89d"). InnerVolumeSpecName "kube-api-access-9k4l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.285048 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k4l2\" (UniqueName: \"kubernetes.io/projected/672e898e-cc7d-4920-a471-e25c47cbd89d-kube-api-access-9k4l2\") on node \"crc\" DevicePath \"\"" Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.710013 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" event={"ID":"672e898e-cc7d-4920-a471-e25c47cbd89d","Type":"ContainerDied","Data":"fe405532b33f20d9ebbc3744e3334ab54d51380a2e84a97e88313f3fd170ad49"} Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.710469 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe405532b33f20d9ebbc3744e3334ab54d51380a2e84a97e88313f3fd170ad49" Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.710081 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535182-wxmn4" Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.781823 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535176-bg2ld"] Feb 26 13:02:06 crc kubenswrapper[4724]: I0226 13:02:06.793797 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535176-bg2ld"] Feb 26 13:02:07 crc kubenswrapper[4724]: I0226 13:02:07.989594 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab95d02d-14dc-4bbe-bc19-79e4bc9a5384" path="/var/lib/kubelet/pods/ab95d02d-14dc-4bbe-bc19-79e4bc9a5384/volumes" Feb 26 13:02:13 crc kubenswrapper[4724]: I0226 13:02:13.983794 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:02:13 crc kubenswrapper[4724]: E0226 13:02:13.985752 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:02:24 crc kubenswrapper[4724]: I0226 13:02:24.748514 4724 scope.go:117] "RemoveContainer" containerID="1bd75aa3245f4c261eb253cce2e2d9e95e353d3c46382fc73386a37c2668515e" Feb 26 13:02:28 crc kubenswrapper[4724]: I0226 13:02:28.976166 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:02:28 crc kubenswrapper[4724]: E0226 13:02:28.977095 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:02:43 crc kubenswrapper[4724]: I0226 13:02:43.981622 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:02:43 crc kubenswrapper[4724]: E0226 13:02:43.982378 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:02:56 crc kubenswrapper[4724]: I0226 13:02:56.975839 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:02:56 crc kubenswrapper[4724]: E0226 13:02:56.976638 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 
13:03:09 crc kubenswrapper[4724]: I0226 13:03:09.975768 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:03:09 crc kubenswrapper[4724]: E0226 13:03:09.976920 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:03:24 crc kubenswrapper[4724]: I0226 13:03:24.976955 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:03:24 crc kubenswrapper[4724]: E0226 13:03:24.977867 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:03:36 crc kubenswrapper[4724]: I0226 13:03:36.980150 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:03:36 crc kubenswrapper[4724]: E0226 13:03:36.981203 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:03:49 crc kubenswrapper[4724]: I0226 13:03:49.977167 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:03:49 crc kubenswrapper[4724]: E0226 13:03:49.978572 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.179542 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535184-gcr9s"] Feb 26 13:04:00 crc kubenswrapper[4724]: E0226 13:04:00.180902 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="extract-content" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.180924 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="extract-content" Feb 26 13:04:00 crc kubenswrapper[4724]: E0226 13:04:00.180970 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="672e898e-cc7d-4920-a471-e25c47cbd89d" containerName="oc" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.180979 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="672e898e-cc7d-4920-a471-e25c47cbd89d" containerName="oc" Feb 26 13:04:00 crc kubenswrapper[4724]: E0226 13:04:00.180998 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="extract-utilities" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.181010 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="extract-utilities" Feb 26 13:04:00 crc kubenswrapper[4724]: E0226 13:04:00.181026 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="registry-server" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.181034 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="registry-server" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.181437 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e8c28c-5893-4543-bda4-20cf2d1866ae" containerName="registry-server" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.181466 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="672e898e-cc7d-4920-a471-e25c47cbd89d" containerName="oc" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.182621 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.186805 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.187351 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.187632 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.192311 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535184-gcr9s"] Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.306967 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4x4\" (UniqueName: \"kubernetes.io/projected/693c6ae2-1455-4870-9c17-ef0d38ff5af8-kube-api-access-dt4x4\") pod \"auto-csr-approver-29535184-gcr9s\" (UID: \"693c6ae2-1455-4870-9c17-ef0d38ff5af8\") " pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.409683 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt4x4\" (UniqueName: \"kubernetes.io/projected/693c6ae2-1455-4870-9c17-ef0d38ff5af8-kube-api-access-dt4x4\") pod \"auto-csr-approver-29535184-gcr9s\" (UID: \"693c6ae2-1455-4870-9c17-ef0d38ff5af8\") " pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.436341 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt4x4\" (UniqueName: \"kubernetes.io/projected/693c6ae2-1455-4870-9c17-ef0d38ff5af8-kube-api-access-dt4x4\") pod \"auto-csr-approver-29535184-gcr9s\" (UID: \"693c6ae2-1455-4870-9c17-ef0d38ff5af8\") " pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:00 crc kubenswrapper[4724]: I0226 13:04:00.529486 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:01 crc kubenswrapper[4724]: I0226 13:04:01.225452 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535184-gcr9s"] Feb 26 13:04:01 crc kubenswrapper[4724]: I0226 13:04:01.238818 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:04:01 crc kubenswrapper[4724]: I0226 13:04:01.851082 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" event={"ID":"693c6ae2-1455-4870-9c17-ef0d38ff5af8","Type":"ContainerStarted","Data":"0010306d6dc9733276e46c5df71c47887596e1aa7b4910928e7583b0786f6b01"} Feb 26 13:04:03 crc kubenswrapper[4724]: I0226 13:04:03.878288 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" event={"ID":"693c6ae2-1455-4870-9c17-ef0d38ff5af8","Type":"ContainerStarted","Data":"83ec43cc78720389053632c2487d80569554ba7173794f5faf49dfd987db3476"} Feb 26 13:04:04 crc kubenswrapper[4724]: I0226 13:04:04.892384 4724 generic.go:334] "Generic (PLEG): container finished" podID="693c6ae2-1455-4870-9c17-ef0d38ff5af8" containerID="83ec43cc78720389053632c2487d80569554ba7173794f5faf49dfd987db3476" exitCode=0 Feb 26 13:04:04 crc kubenswrapper[4724]: I0226 13:04:04.892474 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" event={"ID":"693c6ae2-1455-4870-9c17-ef0d38ff5af8","Type":"ContainerDied","Data":"83ec43cc78720389053632c2487d80569554ba7173794f5faf49dfd987db3476"} Feb 26 13:04:04 crc kubenswrapper[4724]: I0226 13:04:04.975649 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:04:04 crc kubenswrapper[4724]: E0226 13:04:04.975981 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.470764 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.485090 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt4x4\" (UniqueName: \"kubernetes.io/projected/693c6ae2-1455-4870-9c17-ef0d38ff5af8-kube-api-access-dt4x4\") pod \"693c6ae2-1455-4870-9c17-ef0d38ff5af8\" (UID: \"693c6ae2-1455-4870-9c17-ef0d38ff5af8\") " Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.500042 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693c6ae2-1455-4870-9c17-ef0d38ff5af8-kube-api-access-dt4x4" (OuterVolumeSpecName: "kube-api-access-dt4x4") pod "693c6ae2-1455-4870-9c17-ef0d38ff5af8" (UID: "693c6ae2-1455-4870-9c17-ef0d38ff5af8"). InnerVolumeSpecName "kube-api-access-dt4x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.588473 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt4x4\" (UniqueName: \"kubernetes.io/projected/693c6ae2-1455-4870-9c17-ef0d38ff5af8-kube-api-access-dt4x4\") on node \"crc\" DevicePath \"\"" Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.919504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" event={"ID":"693c6ae2-1455-4870-9c17-ef0d38ff5af8","Type":"ContainerDied","Data":"0010306d6dc9733276e46c5df71c47887596e1aa7b4910928e7583b0786f6b01"} Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.919559 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0010306d6dc9733276e46c5df71c47887596e1aa7b4910928e7583b0786f6b01" Feb 26 13:04:06 crc kubenswrapper[4724]: I0226 13:04:06.919622 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535184-gcr9s" Feb 26 13:04:07 crc kubenswrapper[4724]: I0226 13:04:07.660816 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535178-ld56v"] Feb 26 13:04:07 crc kubenswrapper[4724]: I0226 13:04:07.672476 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535178-ld56v"] Feb 26 13:04:07 crc kubenswrapper[4724]: I0226 13:04:07.992570 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1882c9-33ec-4665-a39c-50f62e73280f" path="/var/lib/kubelet/pods/0a1882c9-33ec-4665-a39c-50f62e73280f/volumes" Feb 26 13:04:16 crc kubenswrapper[4724]: I0226 13:04:16.975475 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:04:18 crc kubenswrapper[4724]: I0226 13:04:18.024539 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"d2dd2414db08851e199c7911682ef4fabc2f32d8ee6a812766e0ebcf2d193500"} Feb 26 13:04:24 crc kubenswrapper[4724]: I0226 13:04:24.873568 4724 scope.go:117] "RemoveContainer" containerID="9f97590cafdf4ab6071f87ecc25bdd22bfcdcd705003df33857d60aaa46e220d" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.255011 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p7hqn"] Feb 26 13:05:47 crc kubenswrapper[4724]: E0226 13:05:47.256854 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693c6ae2-1455-4870-9c17-ef0d38ff5af8" containerName="oc" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.256883 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="693c6ae2-1455-4870-9c17-ef0d38ff5af8" containerName="oc" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.257887 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="693c6ae2-1455-4870-9c17-ef0d38ff5af8" containerName="oc" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.265221 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.273691 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7hqn"] Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.384606 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-catalog-content\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.384696 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-utilities\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.384905 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgv6\" (UniqueName: \"kubernetes.io/projected/000fd3a2-9702-4c8e-991c-ed05f059fb65-kube-api-access-lcgv6\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.487576 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-catalog-content\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.487978 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-utilities\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.488003 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgv6\" (UniqueName: \"kubernetes.io/projected/000fd3a2-9702-4c8e-991c-ed05f059fb65-kube-api-access-lcgv6\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.488030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-catalog-content\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.488342 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-utilities\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.517796 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lcgv6\" (UniqueName: \"kubernetes.io/projected/000fd3a2-9702-4c8e-991c-ed05f059fb65-kube-api-access-lcgv6\") pod \"certified-operators-p7hqn\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:47 crc kubenswrapper[4724]: I0226 13:05:47.600525 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:05:48 crc kubenswrapper[4724]: I0226 13:05:48.503195 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7hqn"] Feb 26 13:05:48 crc kubenswrapper[4724]: I0226 13:05:48.965873 4724 generic.go:334] "Generic (PLEG): container finished" podID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerID="9b3f48e2c57ecf6db92d13440ee12aa394cd3bf9ca2734efcb165259f12e359f" exitCode=0 Feb 26 13:05:48 crc kubenswrapper[4724]: I0226 13:05:48.966140 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerDied","Data":"9b3f48e2c57ecf6db92d13440ee12aa394cd3bf9ca2734efcb165259f12e359f"} Feb 26 13:05:48 crc kubenswrapper[4724]: I0226 13:05:48.966813 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerStarted","Data":"7a9549a58405b3075e5512a356804c73fe992470d40eed882e3a6ead13d01804"} Feb 26 13:05:50 crc kubenswrapper[4724]: I0226 13:05:50.985642 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerStarted","Data":"28f382d6726be185288852ed66b8213c39004618cf4e77206aa305e21c119c95"} Feb 26 13:05:56 crc kubenswrapper[4724]: I0226 13:05:56.043810 4724 generic.go:334] "Generic (PLEG): container finished" podID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerID="28f382d6726be185288852ed66b8213c39004618cf4e77206aa305e21c119c95" exitCode=0 Feb 26 13:05:56 crc kubenswrapper[4724]: I0226 13:05:56.043892 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerDied","Data":"28f382d6726be185288852ed66b8213c39004618cf4e77206aa305e21c119c95"} Feb 26 13:05:58 crc kubenswrapper[4724]: I0226 13:05:58.073885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerStarted","Data":"b57502631f6cf6baeacf7683ca0a1962af944b327fc2e13ac3924d0990558553"} Feb 26 13:05:58 crc kubenswrapper[4724]: I0226 13:05:58.097171 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p7hqn" podStartSLOduration=2.8721789170000003 podStartE2EDuration="11.097144351s" podCreationTimestamp="2026-02-26 13:05:47 +0000 UTC" firstStartedPulling="2026-02-26 13:05:48.969570323 +0000 UTC m=+7215.625309438" lastFinishedPulling="2026-02-26 13:05:57.194535757 +0000 UTC m=+7223.850274872" observedRunningTime="2026-02-26 13:05:58.092902043 +0000 UTC m=+7224.748641158" watchObservedRunningTime="2026-02-26 13:05:58.097144351 +0000 UTC m=+7224.752883466" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.144868 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29535186-bmg7h"] Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.146657 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.151657 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shg8n\" (UniqueName: \"kubernetes.io/projected/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb-kube-api-access-shg8n\") pod \"auto-csr-approver-29535186-bmg7h\" (UID: \"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb\") " pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.155602 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.155628 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.158094 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.171232 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535186-bmg7h"] Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.253977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shg8n\" (UniqueName: \"kubernetes.io/projected/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb-kube-api-access-shg8n\") pod \"auto-csr-approver-29535186-bmg7h\" (UID: \"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb\") " pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.274507 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shg8n\" (UniqueName: \"kubernetes.io/projected/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb-kube-api-access-shg8n\") pod \"auto-csr-approver-29535186-bmg7h\" (UID: \"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb\") " pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.470982 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:00 crc kubenswrapper[4724]: I0226 13:06:00.971624 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535186-bmg7h"] Feb 26 13:06:01 crc kubenswrapper[4724]: I0226 13:06:01.100554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" event={"ID":"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb","Type":"ContainerStarted","Data":"32c3ccdf5fc3c61e1e670f20d18b0a11c8f7cd4690453fa754dbff5d7dd445ea"} Feb 26 13:06:03 crc kubenswrapper[4724]: I0226 13:06:03.123690 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" event={"ID":"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb","Type":"ContainerStarted","Data":"b9dd7baf0b0df85e53869d54f30c5074f7b95d6f396bfebda564fafb6af176b2"} Feb 26 13:06:03 crc kubenswrapper[4724]: I0226 13:06:03.144541 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" podStartSLOduration=2.102410682 podStartE2EDuration="3.144519505s" podCreationTimestamp="2026-02-26 13:06:00 +0000 UTC" firstStartedPulling="2026-02-26 13:06:00.973838582 +0000 UTC m=+7227.629577697" lastFinishedPulling="2026-02-26 13:06:02.015947405 +0000 UTC m=+7228.671686520" observedRunningTime="2026-02-26 13:06:03.137582478 +0000 UTC m=+7229.793321583" watchObservedRunningTime="2026-02-26 13:06:03.144519505 +0000 UTC m=+7229.800258620" Feb 26 13:06:05 crc kubenswrapper[4724]: I0226 13:06:05.145751 4724 generic.go:334] "Generic (PLEG): container finished" podID="6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb" containerID="b9dd7baf0b0df85e53869d54f30c5074f7b95d6f396bfebda564fafb6af176b2" exitCode=0 Feb 26 13:06:05 crc kubenswrapper[4724]: I0226 13:06:05.146090 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" event={"ID":"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb","Type":"ContainerDied","Data":"b9dd7baf0b0df85e53869d54f30c5074f7b95d6f396bfebda564fafb6af176b2"} Feb 26 13:06:06 crc kubenswrapper[4724]: I0226 13:06:06.574463 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:06 crc kubenswrapper[4724]: I0226 13:06:06.739004 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shg8n\" (UniqueName: \"kubernetes.io/projected/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb-kube-api-access-shg8n\") pod \"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb\" (UID: \"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb\") " Feb 26 13:06:06 crc kubenswrapper[4724]: I0226 13:06:06.749514 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb-kube-api-access-shg8n" (OuterVolumeSpecName: "kube-api-access-shg8n") pod "6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb" (UID: "6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb"). InnerVolumeSpecName "kube-api-access-shg8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:06:06 crc kubenswrapper[4724]: I0226 13:06:06.842103 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shg8n\" (UniqueName: \"kubernetes.io/projected/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb-kube-api-access-shg8n\") on node \"crc\" DevicePath \"\"" Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.165282 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" event={"ID":"6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb","Type":"ContainerDied","Data":"32c3ccdf5fc3c61e1e670f20d18b0a11c8f7cd4690453fa754dbff5d7dd445ea"} Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.165321 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c3ccdf5fc3c61e1e670f20d18b0a11c8f7cd4690453fa754dbff5d7dd445ea" Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.165898 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535186-bmg7h" Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.227978 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535180-q4552"] Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.240154 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535180-q4552"] Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.601254 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.601319 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:06:07 crc kubenswrapper[4724]: I0226 13:06:07.988662 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a13762-53eb-4d56-b416-33fc8ebc2592" path="/var/lib/kubelet/pods/a4a13762-53eb-4d56-b416-33fc8ebc2592/volumes" Feb 26 13:06:08 crc kubenswrapper[4724]: I0226 13:06:08.650300 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p7hqn" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" probeResult="failure" output=< Feb 26 13:06:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:06:08 crc kubenswrapper[4724]: > Feb 26 13:06:18 crc kubenswrapper[4724]: I0226 13:06:18.656401 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p7hqn" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" probeResult="failure" output=< Feb 26 13:06:18 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:06:18 crc kubenswrapper[4724]: > Feb 26 13:06:25 crc kubenswrapper[4724]: I0226 13:06:25.142260 4724 scope.go:117] "RemoveContainer" containerID="f585ba93f838cb069a670e46e5d3976890253cd116595bf15ed3ce3a921a898d" Feb 26 13:06:28 crc kubenswrapper[4724]: I0226 13:06:28.655112 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p7hqn" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" probeResult="failure" output=< Feb 26 13:06:28 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:06:28 crc kubenswrapper[4724]: > Feb 26 13:06:37 crc kubenswrapper[4724]: I0226 13:06:37.655675 
4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:06:37 crc kubenswrapper[4724]: I0226 13:06:37.713627 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:06:37 crc kubenswrapper[4724]: I0226 13:06:37.899297 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7hqn"] Feb 26 13:06:39 crc kubenswrapper[4724]: I0226 13:06:39.503515 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p7hqn" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" containerID="cri-o://b57502631f6cf6baeacf7683ca0a1962af944b327fc2e13ac3924d0990558553" gracePeriod=2 Feb 26 13:06:40 crc kubenswrapper[4724]: I0226 13:06:40.516644 4724 generic.go:334] "Generic (PLEG): container finished" podID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerID="b57502631f6cf6baeacf7683ca0a1962af944b327fc2e13ac3924d0990558553" exitCode=0 Feb 26 13:06:40 crc kubenswrapper[4724]: I0226 13:06:40.516736 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerDied","Data":"b57502631f6cf6baeacf7683ca0a1962af944b327fc2e13ac3924d0990558553"} Feb 26 13:06:40 crc kubenswrapper[4724]: I0226 13:06:40.853526 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.012882 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcgv6\" (UniqueName: \"kubernetes.io/projected/000fd3a2-9702-4c8e-991c-ed05f059fb65-kube-api-access-lcgv6\") pod \"000fd3a2-9702-4c8e-991c-ed05f059fb65\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.013100 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-catalog-content\") pod \"000fd3a2-9702-4c8e-991c-ed05f059fb65\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.014380 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-utilities" (OuterVolumeSpecName: "utilities") pod "000fd3a2-9702-4c8e-991c-ed05f059fb65" (UID: "000fd3a2-9702-4c8e-991c-ed05f059fb65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.014441 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-utilities\") pod \"000fd3a2-9702-4c8e-991c-ed05f059fb65\" (UID: \"000fd3a2-9702-4c8e-991c-ed05f059fb65\") " Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.016103 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.047234 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000fd3a2-9702-4c8e-991c-ed05f059fb65-kube-api-access-lcgv6" (OuterVolumeSpecName: "kube-api-access-lcgv6") pod "000fd3a2-9702-4c8e-991c-ed05f059fb65" (UID: "000fd3a2-9702-4c8e-991c-ed05f059fb65"). InnerVolumeSpecName "kube-api-access-lcgv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.117557 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcgv6\" (UniqueName: \"kubernetes.io/projected/000fd3a2-9702-4c8e-991c-ed05f059fb65-kube-api-access-lcgv6\") on node \"crc\" DevicePath \"\"" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.146695 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "000fd3a2-9702-4c8e-991c-ed05f059fb65" (UID: "000fd3a2-9702-4c8e-991c-ed05f059fb65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.219839 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000fd3a2-9702-4c8e-991c-ed05f059fb65-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.530002 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7hqn" event={"ID":"000fd3a2-9702-4c8e-991c-ed05f059fb65","Type":"ContainerDied","Data":"7a9549a58405b3075e5512a356804c73fe992470d40eed882e3a6ead13d01804"} Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.530343 4724 scope.go:117] "RemoveContainer" containerID="b57502631f6cf6baeacf7683ca0a1962af944b327fc2e13ac3924d0990558553" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.530104 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p7hqn" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.554539 4724 scope.go:117] "RemoveContainer" containerID="28f382d6726be185288852ed66b8213c39004618cf4e77206aa305e21c119c95" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.592240 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7hqn"] Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.608596 4724 scope.go:117] "RemoveContainer" containerID="9b3f48e2c57ecf6db92d13440ee12aa394cd3bf9ca2734efcb165259f12e359f" Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.611742 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p7hqn"] Feb 26 13:06:41 crc kubenswrapper[4724]: I0226 13:06:41.987154 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" path="/var/lib/kubelet/pods/000fd3a2-9702-4c8e-991c-ed05f059fb65/volumes" Feb 26 13:06:46 crc kubenswrapper[4724]: I0226 13:06:46.906294 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:06:46 crc kubenswrapper[4724]: I0226 13:06:46.907987 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:07:16 crc kubenswrapper[4724]: I0226 13:07:16.907253 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:07:16 crc kubenswrapper[4724]: I0226 13:07:16.907839 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:07:46 crc kubenswrapper[4724]: I0226 13:07:46.906891 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:07:46 crc kubenswrapper[4724]: I0226 13:07:46.908552 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:07:46 crc kubenswrapper[4724]: I0226 13:07:46.908685 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:07:46 crc kubenswrapper[4724]: I0226 13:07:46.909517 4724 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2dd2414db08851e199c7911682ef4fabc2f32d8ee6a812766e0ebcf2d193500"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:07:46 crc kubenswrapper[4724]: I0226 13:07:46.909696 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://d2dd2414db08851e199c7911682ef4fabc2f32d8ee6a812766e0ebcf2d193500" gracePeriod=600 Feb 26 13:07:47 crc kubenswrapper[4724]: I0226 13:07:47.188258 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="d2dd2414db08851e199c7911682ef4fabc2f32d8ee6a812766e0ebcf2d193500" exitCode=0 Feb 26 13:07:47 crc kubenswrapper[4724]: I0226 13:07:47.188329 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"d2dd2414db08851e199c7911682ef4fabc2f32d8ee6a812766e0ebcf2d193500"} Feb 26 13:07:47 crc kubenswrapper[4724]: I0226 13:07:47.188943 4724 scope.go:117] "RemoveContainer" containerID="d18b26619802ce37e24b1e3f20d1d6cc8f04101b3e24ea24cb85d30ce29e0c54" Feb 26 13:07:48 crc kubenswrapper[4724]: I0226 13:07:48.204592 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402"} Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.195728 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535188-mdktf"] Feb 26 13:08:00 crc kubenswrapper[4724]: E0226 13:08:00.197659 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.197684 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" Feb 26 13:08:00 crc kubenswrapper[4724]: E0226 13:08:00.197724 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="extract-utilities" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.197739 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="extract-utilities" Feb 26 13:08:00 crc kubenswrapper[4724]: E0226 13:08:00.197776 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb" containerName="oc" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.197785 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb" containerName="oc" Feb 26 13:08:00 crc kubenswrapper[4724]: E0226 13:08:00.197806 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="extract-content" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.197813 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="extract-content" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.198058 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb" containerName="oc" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.198077 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="000fd3a2-9702-4c8e-991c-ed05f059fb65" containerName="registry-server" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.199110 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.209730 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.209805 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.225342 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.244733 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8hd\" (UniqueName: \"kubernetes.io/projected/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1-kube-api-access-4s8hd\") pod \"auto-csr-approver-29535188-mdktf\" (UID: \"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1\") " pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.256303 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535188-mdktf"] Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.348764 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s8hd\" (UniqueName: \"kubernetes.io/projected/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1-kube-api-access-4s8hd\") pod \"auto-csr-approver-29535188-mdktf\" (UID: \"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1\") " pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.380712 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s8hd\" (UniqueName: \"kubernetes.io/projected/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1-kube-api-access-4s8hd\") pod \"auto-csr-approver-29535188-mdktf\" (UID: \"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1\") " pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:00 crc kubenswrapper[4724]: I0226 13:08:00.542933 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:01 crc kubenswrapper[4724]: I0226 13:08:01.118616 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535188-mdktf"] Feb 26 13:08:01 crc kubenswrapper[4724]: I0226 13:08:01.342053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535188-mdktf" event={"ID":"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1","Type":"ContainerStarted","Data":"4504bba6ac9447a3d4385ceea108f9fce5f52174c160658e1789f4780f95606c"} Feb 26 13:08:03 crc kubenswrapper[4724]: I0226 13:08:03.361391 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535188-mdktf" event={"ID":"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1","Type":"ContainerStarted","Data":"e2552d218fa23f497223ca478c416e9a36f349fe30384c9ce0a59ae19e27a36d"} Feb 26 13:08:03 crc kubenswrapper[4724]: I0226 13:08:03.382323 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535188-mdktf" podStartSLOduration=2.018754829 podStartE2EDuration="3.382272856s" podCreationTimestamp="2026-02-26 13:08:00 +0000 UTC" firstStartedPulling="2026-02-26 13:08:01.126442168 +0000 UTC m=+7347.782181283" lastFinishedPulling="2026-02-26 13:08:02.489960195 +0000 UTC m=+7349.145699310" observedRunningTime="2026-02-26 13:08:03.3750049 +0000 UTC m=+7350.030744035" watchObservedRunningTime="2026-02-26 13:08:03.382272856 +0000 UTC m=+7350.038011971" Feb 26 13:08:04 crc kubenswrapper[4724]: I0226 13:08:04.374444 4724 generic.go:334] "Generic (PLEG): container finished" podID="fdb7d8bc-dd78-419d-9b96-fb961c1ddde1" containerID="e2552d218fa23f497223ca478c416e9a36f349fe30384c9ce0a59ae19e27a36d" exitCode=0 Feb 26 13:08:04 crc kubenswrapper[4724]: I0226 13:08:04.374736 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535188-mdktf" event={"ID":"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1","Type":"ContainerDied","Data":"e2552d218fa23f497223ca478c416e9a36f349fe30384c9ce0a59ae19e27a36d"} Feb 26 13:08:05 crc kubenswrapper[4724]: I0226 13:08:05.799644 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:05 crc kubenswrapper[4724]: I0226 13:08:05.820109 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s8hd\" (UniqueName: \"kubernetes.io/projected/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1-kube-api-access-4s8hd\") pod \"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1\" (UID: \"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1\") " Feb 26 13:08:05 crc kubenswrapper[4724]: I0226 13:08:05.829556 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1-kube-api-access-4s8hd" (OuterVolumeSpecName: "kube-api-access-4s8hd") pod "fdb7d8bc-dd78-419d-9b96-fb961c1ddde1" (UID: "fdb7d8bc-dd78-419d-9b96-fb961c1ddde1"). InnerVolumeSpecName "kube-api-access-4s8hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:08:05 crc kubenswrapper[4724]: I0226 13:08:05.923468 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s8hd\" (UniqueName: \"kubernetes.io/projected/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1-kube-api-access-4s8hd\") on node \"crc\" DevicePath \"\"" Feb 26 13:08:06 crc kubenswrapper[4724]: I0226 13:08:06.401417 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535188-mdktf" event={"ID":"fdb7d8bc-dd78-419d-9b96-fb961c1ddde1","Type":"ContainerDied","Data":"4504bba6ac9447a3d4385ceea108f9fce5f52174c160658e1789f4780f95606c"} Feb 26 13:08:06 crc kubenswrapper[4724]: I0226 13:08:06.401726 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4504bba6ac9447a3d4385ceea108f9fce5f52174c160658e1789f4780f95606c" Feb 26 13:08:06 crc kubenswrapper[4724]: I0226 13:08:06.401526 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535188-mdktf" Feb 26 13:08:06 crc kubenswrapper[4724]: I0226 13:08:06.468473 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535182-wxmn4"] Feb 26 13:08:06 crc kubenswrapper[4724]: I0226 13:08:06.476601 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535182-wxmn4"] Feb 26 13:08:07 crc kubenswrapper[4724]: I0226 13:08:07.993029 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="672e898e-cc7d-4920-a471-e25c47cbd89d" path="/var/lib/kubelet/pods/672e898e-cc7d-4920-a471-e25c47cbd89d/volumes" Feb 26 13:08:25 crc kubenswrapper[4724]: I0226 13:08:25.326586 4724 scope.go:117] "RemoveContainer" containerID="3ae345e5dbbed60285eb9316a677d67db92eb919265d9f75cc4a67ae3331ba59" Feb 26 13:09:11 crc kubenswrapper[4724]: I0226 13:09:11.766411 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2xxv9"] Feb 26 13:09:11 crc kubenswrapper[4724]: E0226 13:09:11.772362 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb7d8bc-dd78-419d-9b96-fb961c1ddde1" containerName="oc" Feb 26 13:09:11 crc kubenswrapper[4724]: I0226 13:09:11.772707 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb7d8bc-dd78-419d-9b96-fb961c1ddde1" containerName="oc" Feb 26 13:09:11 crc kubenswrapper[4724]: I0226 13:09:11.773015 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb7d8bc-dd78-419d-9b96-fb961c1ddde1" containerName="oc" Feb 26 13:09:11 crc kubenswrapper[4724]: I0226 13:09:11.776663 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:11 crc kubenswrapper[4724]: I0226 13:09:11.784960 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xxv9"] Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.024420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-catalog-content\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.024544 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-utilities\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.024632 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxj9\" (UniqueName: \"kubernetes.io/projected/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-kube-api-access-9sxj9\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.126761 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxj9\" (UniqueName: \"kubernetes.io/projected/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-kube-api-access-9sxj9\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.126867 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-catalog-content\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.126967 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-utilities\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.127689 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-utilities\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.127972 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-catalog-content\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.158910 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9sxj9\" (UniqueName: \"kubernetes.io/projected/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-kube-api-access-9sxj9\") pod \"community-operators-2xxv9\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:12 crc kubenswrapper[4724]: I0226 13:09:12.403934 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:13 crc kubenswrapper[4724]: I0226 13:09:13.066438 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xxv9"] Feb 26 13:09:13 crc kubenswrapper[4724]: I0226 13:09:13.157555 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerStarted","Data":"436c80e6f7243a967b045961ee2425056ad4c5d0c1f966cfc205592a288c8dbd"} Feb 26 13:09:14 crc kubenswrapper[4724]: I0226 13:09:14.166138 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerID="0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0" exitCode=0 Feb 26 13:09:14 crc kubenswrapper[4724]: I0226 13:09:14.166475 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerDied","Data":"0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0"} Feb 26 13:09:14 crc kubenswrapper[4724]: I0226 13:09:14.169633 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:09:16 crc kubenswrapper[4724]: I0226 13:09:16.187385 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerStarted","Data":"ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c"} Feb 26 13:09:22 crc kubenswrapper[4724]: I0226 13:09:22.252696 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerID="ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c" exitCode=0 Feb 26 13:09:22 crc kubenswrapper[4724]: I0226 13:09:22.252759 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerDied","Data":"ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c"} Feb 26 13:09:25 crc kubenswrapper[4724]: I0226 13:09:25.325126 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerStarted","Data":"8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac"} Feb 26 13:09:25 crc kubenswrapper[4724]: I0226 13:09:25.358266 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2xxv9" podStartSLOduration=4.518752 podStartE2EDuration="14.358238675s" podCreationTimestamp="2026-02-26 13:09:11 +0000 UTC" firstStartedPulling="2026-02-26 13:09:14.169288302 +0000 UTC m=+7420.825027417" lastFinishedPulling="2026-02-26 13:09:24.008774977 +0000 UTC m=+7430.664514092" observedRunningTime="2026-02-26 13:09:25.352858718 +0000 UTC m=+7432.008597833" watchObservedRunningTime="2026-02-26 
Feb 26 13:09:32 crc kubenswrapper[4724]: I0226 13:09:32.411750 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:32 crc kubenswrapper[4724]: I0226 13:09:32.414775 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:09:33 crc kubenswrapper[4724]: I0226 13:09:33.478938 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2xxv9" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" probeResult="failure" output=< Feb 26 13:09:33 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:09:33 crc kubenswrapper[4724]: > Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.428604 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7dcxj"] Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.431083 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.444398 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dcxj"] Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.578403 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-utilities\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.578455 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-catalog-content\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.578648 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck2pz\" (UniqueName: \"kubernetes.io/projected/cc075666-0f32-4960-a6d0-ed53a6181b6c-kube-api-access-ck2pz\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.680749 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-utilities\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.680797 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-catalog-content\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.680842 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck2pz\"
(UniqueName: \"kubernetes.io/projected/cc075666-0f32-4960-a6d0-ed53a6181b6c-kube-api-access-ck2pz\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.681397 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-utilities\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.681569 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-catalog-content\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.711525 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck2pz\" (UniqueName: \"kubernetes.io/projected/cc075666-0f32-4960-a6d0-ed53a6181b6c-kube-api-access-ck2pz\") pod \"redhat-operators-7dcxj\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:39 crc kubenswrapper[4724]: I0226 13:09:39.757483 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:09:40 crc kubenswrapper[4724]: I0226 13:09:40.303912 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dcxj"] Feb 26 13:09:40 crc kubenswrapper[4724]: I0226 13:09:40.541504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerStarted","Data":"f96370839c8dedf7696f5e4dc43c3065ce606edec77a17b258d87564b968dd5e"} Feb 26 13:09:41 crc kubenswrapper[4724]: I0226 13:09:41.551215 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerID="5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c" exitCode=0 Feb 26 13:09:41 crc kubenswrapper[4724]: I0226 13:09:41.551589 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerDied","Data":"5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c"} Feb 26 13:09:43 crc kubenswrapper[4724]: I0226 13:09:43.455590 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2xxv9" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" probeResult="failure" output=< Feb 26 13:09:43 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:09:43 crc kubenswrapper[4724]: > Feb 26 13:09:45 crc kubenswrapper[4724]: I0226 13:09:45.593867 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerStarted","Data":"51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21"} Feb 26 13:09:53 crc kubenswrapper[4724]: I0226 13:09:53.572929 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2xxv9" 
podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" probeResult="failure" output=< Feb 26 13:09:53 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:09:53 crc kubenswrapper[4724]: > Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.362648 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535190-w5ln7"] Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.504330 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535190-w5ln7"] Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.504439 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.583142 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.585769 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.589489 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.640950 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwp5\" (UniqueName: \"kubernetes.io/projected/ce267166-8cda-4753-b3a2-9bd506685c29-kube-api-access-kgwp5\") pod \"auto-csr-approver-29535190-w5ln7\" (UID: \"ce267166-8cda-4753-b3a2-9bd506685c29\") " pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.742803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwp5\" (UniqueName: \"kubernetes.io/projected/ce267166-8cda-4753-b3a2-9bd506685c29-kube-api-access-kgwp5\") pod \"auto-csr-approver-29535190-w5ln7\" (UID: \"ce267166-8cda-4753-b3a2-9bd506685c29\") " pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.781469 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwp5\" (UniqueName: \"kubernetes.io/projected/ce267166-8cda-4753-b3a2-9bd506685c29-kube-api-access-kgwp5\") pod \"auto-csr-approver-29535190-w5ln7\" (UID: \"ce267166-8cda-4753-b3a2-9bd506685c29\") " pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:00 crc kubenswrapper[4724]: I0226 13:10:00.980410 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:03 crc kubenswrapper[4724]: I0226 13:10:03.472238 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2xxv9" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:03 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:03 crc kubenswrapper[4724]: > Feb 26 13:10:12 crc kubenswrapper[4724]: I0226 13:10:12.153676 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535190-w5ln7"] Feb 26 13:10:12 crc kubenswrapper[4724]: I0226 13:10:12.638129 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" event={"ID":"ce267166-8cda-4753-b3a2-9bd506685c29","Type":"ContainerStarted","Data":"7336b43439b853c9b2d26c2c579f915502e9a95806546fc689da78da6ca3c795"} Feb 26 13:10:13 crc kubenswrapper[4724]: I0226 13:10:13.458131 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2xxv9" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:13 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:13 crc kubenswrapper[4724]: > Feb 26 13:10:15 crc kubenswrapper[4724]: I0226 13:10:15.687309 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerID="51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21" exitCode=0 Feb 26 13:10:15 crc kubenswrapper[4724]: I0226 13:10:15.688039 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerDied","Data":"51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21"} Feb 26 13:10:15 crc kubenswrapper[4724]: I0226 13:10:15.691423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" event={"ID":"ce267166-8cda-4753-b3a2-9bd506685c29","Type":"ContainerStarted","Data":"a6f075b5bd630272dcc939a6ff9850acc16b04ba30e2648022ee78e0d9656ebd"} Feb 26 13:10:15 crc kubenswrapper[4724]: I0226 13:10:15.753301 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" podStartSLOduration=13.669847139 podStartE2EDuration="15.753259416s" podCreationTimestamp="2026-02-26 13:10:00 +0000 UTC" firstStartedPulling="2026-02-26 13:10:12.188354722 +0000 UTC m=+7478.844093827" lastFinishedPulling="2026-02-26 13:10:14.271766979 +0000 UTC m=+7480.927506104" observedRunningTime="2026-02-26 13:10:15.739559356 +0000 UTC m=+7482.395298471" watchObservedRunningTime="2026-02-26 13:10:15.753259416 +0000 UTC m=+7482.408998531" Feb 26 13:10:16 crc kubenswrapper[4724]: I0226 13:10:16.905986 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:10:16 crc kubenswrapper[4724]: I0226 13:10:16.989741 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:10:18 crc kubenswrapper[4724]: I0226 13:10:18.726075 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerStarted","Data":"1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7"} Feb 26 13:10:18 crc kubenswrapper[4724]: I0226 13:10:18.729345 4724 generic.go:334] "Generic (PLEG): container finished" podID="ce267166-8cda-4753-b3a2-9bd506685c29" containerID="a6f075b5bd630272dcc939a6ff9850acc16b04ba30e2648022ee78e0d9656ebd" exitCode=0 Feb 26 13:10:18 crc kubenswrapper[4724]: I0226 13:10:18.729382 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" event={"ID":"ce267166-8cda-4753-b3a2-9bd506685c29","Type":"ContainerDied","Data":"a6f075b5bd630272dcc939a6ff9850acc16b04ba30e2648022ee78e0d9656ebd"} Feb 26 13:10:18 crc kubenswrapper[4724]: I0226 13:10:18.754198 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7dcxj" podStartSLOduration=3.8577747589999998 podStartE2EDuration="39.754136256s" podCreationTimestamp="2026-02-26 13:09:39 +0000 UTC" firstStartedPulling="2026-02-26 13:09:41.553679229 +0000 UTC m=+7448.209418344" lastFinishedPulling="2026-02-26 13:10:17.450040726 +0000 UTC m=+7484.105779841" observedRunningTime="2026-02-26 13:10:18.752088683 +0000 UTC m=+7485.407827818" watchObservedRunningTime="2026-02-26 13:10:18.754136256 +0000 UTC m=+7485.409875371" Feb 26 13:10:19 crc kubenswrapper[4724]: I0226 13:10:19.759548 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:10:19 crc kubenswrapper[4724]: I0226 13:10:19.760067 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.153255 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.290458 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgwp5\" (UniqueName: \"kubernetes.io/projected/ce267166-8cda-4753-b3a2-9bd506685c29-kube-api-access-kgwp5\") pod \"ce267166-8cda-4753-b3a2-9bd506685c29\" (UID: \"ce267166-8cda-4753-b3a2-9bd506685c29\") " Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.311899 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce267166-8cda-4753-b3a2-9bd506685c29-kube-api-access-kgwp5" (OuterVolumeSpecName: "kube-api-access-kgwp5") pod "ce267166-8cda-4753-b3a2-9bd506685c29" (UID: "ce267166-8cda-4753-b3a2-9bd506685c29"). InnerVolumeSpecName "kube-api-access-kgwp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.393853 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgwp5\" (UniqueName: \"kubernetes.io/projected/ce267166-8cda-4753-b3a2-9bd506685c29-kube-api-access-kgwp5\") on node \"crc\" DevicePath \"\"" Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.760223 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" event={"ID":"ce267166-8cda-4753-b3a2-9bd506685c29","Type":"ContainerDied","Data":"7336b43439b853c9b2d26c2c579f915502e9a95806546fc689da78da6ca3c795"} Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.761284 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7336b43439b853c9b2d26c2c579f915502e9a95806546fc689da78da6ca3c795" Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.760307 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535190-w5ln7" Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.825488 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dcxj" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:20 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:20 crc kubenswrapper[4724]: > Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.878785 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535184-gcr9s"] Feb 26 13:10:20 crc kubenswrapper[4724]: I0226 13:10:20.891755 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535184-gcr9s"] Feb 26 13:10:21 crc kubenswrapper[4724]: I0226 13:10:21.994384 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="693c6ae2-1455-4870-9c17-ef0d38ff5af8" path="/var/lib/kubelet/pods/693c6ae2-1455-4870-9c17-ef0d38ff5af8/volumes" Feb 26 13:10:23 crc kubenswrapper[4724]: I0226 13:10:23.463723 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2xxv9" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:23 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:23 crc kubenswrapper[4724]: > Feb 26 13:10:25 crc kubenswrapper[4724]: I0226 13:10:25.465476 4724 scope.go:117] "RemoveContainer" containerID="83ec43cc78720389053632c2487d80569554ba7173794f5faf49dfd987db3476" Feb 26 13:10:30 crc kubenswrapper[4724]: I0226 13:10:30.814484 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dcxj" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:30 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:30 crc kubenswrapper[4724]: > Feb 26 13:10:32 crc kubenswrapper[4724]: I0226 13:10:32.477854 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:10:32 crc kubenswrapper[4724]: I0226 13:10:32.534988 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:10:32 crc kubenswrapper[4724]: I0226 13:10:32.734934 
4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2xxv9"] Feb 26 13:10:33 crc kubenswrapper[4724]: I0226 13:10:33.912525 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2xxv9" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" containerID="cri-o://8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac" gracePeriod=2 Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.457319 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.545001 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sxj9\" (UniqueName: \"kubernetes.io/projected/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-kube-api-access-9sxj9\") pod \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.545434 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-catalog-content\") pod \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.545467 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-utilities\") pod \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\" (UID: \"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77\") " Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.546076 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-utilities" (OuterVolumeSpecName: "utilities") pod "f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" (UID: "f4e0d05c-00d7-43e2-87c4-7c908b1c9b77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.558571 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-kube-api-access-9sxj9" (OuterVolumeSpecName: "kube-api-access-9sxj9") pod "f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" (UID: "f4e0d05c-00d7-43e2-87c4-7c908b1c9b77"). InnerVolumeSpecName "kube-api-access-9sxj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.620830 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" (UID: "f4e0d05c-00d7-43e2-87c4-7c908b1c9b77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.648896 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.648954 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.648970 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sxj9\" (UniqueName: \"kubernetes.io/projected/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77-kube-api-access-9sxj9\") on node \"crc\" DevicePath \"\"" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.915643 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerID="8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac" exitCode=0 Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.915699 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xxv9" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.915707 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerDied","Data":"8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac"} Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.915754 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xxv9" event={"ID":"f4e0d05c-00d7-43e2-87c4-7c908b1c9b77","Type":"ContainerDied","Data":"436c80e6f7243a967b045961ee2425056ad4c5d0c1f966cfc205592a288c8dbd"} Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.915780 4724 scope.go:117] "RemoveContainer" containerID="8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.944929 4724 scope.go:117] "RemoveContainer" containerID="ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.967601 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2xxv9"] Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.979854 4724 scope.go:117] "RemoveContainer" containerID="0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0" Feb 26 13:10:34 crc kubenswrapper[4724]: I0226 13:10:34.980452 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2xxv9"] Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.033667 4724 scope.go:117] "RemoveContainer" containerID="8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac" Feb 26 13:10:35 crc kubenswrapper[4724]: E0226 13:10:35.041641 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac\": container with ID starting with 8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac not found: ID does not exist" containerID="8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac" Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.041721 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac"} err="failed to get container status \"8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac\": rpc error: code = NotFound desc = could not find container \"8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac\": container with ID starting with 8b77cbc55715f19b24eb8be2b81e643719d92eb5c25bf134a93b333eb33e24ac not found: ID does not exist" Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.041766 4724 scope.go:117] "RemoveContainer" containerID="ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c" Feb 26 13:10:35 crc kubenswrapper[4724]: E0226 13:10:35.045361 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c\": container with ID starting with ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c not found: ID does not exist" containerID="ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c" Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.045428 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c"} err="failed to get container status \"ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c\": rpc error: code = NotFound desc = could not find container \"ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c\": container with ID starting with ceb6359480f967aa79e2a8424dc8bbc6f04632e51913a7544ce41c54458ad57c not found: ID does not exist" Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.045465 4724 scope.go:117] "RemoveContainer" containerID="0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0" Feb 26 13:10:35 crc kubenswrapper[4724]: E0226 13:10:35.045730 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0\": container with ID starting with 0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0 not found: ID does not exist" containerID="0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0" Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.045770 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0"} err="failed to get container status \"0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0\": rpc error: code = NotFound desc = could not find container \"0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0\": container with ID starting with 0f488be7da62a681693412716ee2b487e03c6908e0fcf10a6ad88281678635e0 not found: ID does not exist" Feb 26 13:10:35 crc kubenswrapper[4724]: I0226 13:10:35.990082 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" path="/var/lib/kubelet/pods/f4e0d05c-00d7-43e2-87c4-7c908b1c9b77/volumes" Feb 26 13:10:40 crc kubenswrapper[4724]: I0226 13:10:40.809058 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dcxj" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:40 crc 
kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:40 crc kubenswrapper[4724]: > Feb 26 13:10:46 crc kubenswrapper[4724]: I0226 13:10:46.907614 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:10:46 crc kubenswrapper[4724]: I0226 13:10:46.908153 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:10:50 crc kubenswrapper[4724]: I0226 13:10:50.825511 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dcxj" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" probeResult="failure" output=< Feb 26 13:10:50 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:10:50 crc kubenswrapper[4724]: > Feb 26 13:11:00 crc kubenswrapper[4724]: I0226 13:11:00.825872 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dcxj" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" probeResult="failure" output=< Feb 26 13:11:00 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:11:00 crc kubenswrapper[4724]: > Feb 26 13:11:09 crc kubenswrapper[4724]: I0226 13:11:09.817731 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:11:09 crc kubenswrapper[4724]: I0226 13:11:09.890042 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:11:10 crc kubenswrapper[4724]: I0226 13:11:10.683310 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dcxj"] Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.257057 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7dcxj" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" containerID="cri-o://1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7" gracePeriod=2 Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.804590 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.942299 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck2pz\" (UniqueName: \"kubernetes.io/projected/cc075666-0f32-4960-a6d0-ed53a6181b6c-kube-api-access-ck2pz\") pod \"cc075666-0f32-4960-a6d0-ed53a6181b6c\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.943620 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-utilities\") pod \"cc075666-0f32-4960-a6d0-ed53a6181b6c\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.943699 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-catalog-content\") pod \"cc075666-0f32-4960-a6d0-ed53a6181b6c\" (UID: \"cc075666-0f32-4960-a6d0-ed53a6181b6c\") " Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.944105 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-utilities" (OuterVolumeSpecName: "utilities") pod "cc075666-0f32-4960-a6d0-ed53a6181b6c" (UID: "cc075666-0f32-4960-a6d0-ed53a6181b6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.956295 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:11:11 crc kubenswrapper[4724]: I0226 13:11:11.964822 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc075666-0f32-4960-a6d0-ed53a6181b6c-kube-api-access-ck2pz" (OuterVolumeSpecName: "kube-api-access-ck2pz") pod "cc075666-0f32-4960-a6d0-ed53a6181b6c" (UID: "cc075666-0f32-4960-a6d0-ed53a6181b6c"). InnerVolumeSpecName "kube-api-access-ck2pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.058338 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck2pz\" (UniqueName: \"kubernetes.io/projected/cc075666-0f32-4960-a6d0-ed53a6181b6c-kube-api-access-ck2pz\") on node \"crc\" DevicePath \"\"" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.114532 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc075666-0f32-4960-a6d0-ed53a6181b6c" (UID: "cc075666-0f32-4960-a6d0-ed53a6181b6c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.161352 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc075666-0f32-4960-a6d0-ed53a6181b6c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.274817 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerID="1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7" exitCode=0 Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.274892 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerDied","Data":"1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7"} Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.274962 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dcxj" event={"ID":"cc075666-0f32-4960-a6d0-ed53a6181b6c","Type":"ContainerDied","Data":"f96370839c8dedf7696f5e4dc43c3065ce606edec77a17b258d87564b968dd5e"} Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.274992 4724 scope.go:117] "RemoveContainer" containerID="1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.275255 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dcxj" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.308631 4724 scope.go:117] "RemoveContainer" containerID="51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.328569 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dcxj"] Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.348356 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7dcxj"] Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.370547 4724 scope.go:117] "RemoveContainer" containerID="5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.401125 4724 scope.go:117] "RemoveContainer" containerID="1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7" Feb 26 13:11:12 crc kubenswrapper[4724]: E0226 13:11:12.401748 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7\": container with ID starting with 1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7 not found: ID does not exist" containerID="1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.401776 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7"} err="failed to get container status \"1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7\": rpc error: code = NotFound desc = could not find container \"1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7\": container with ID starting with 1e7b62ec600699e9f2e5f9c3b90c6d05259d61ef5736d1d4cbe342e76202dcd7 not found: ID does not exist" Feb 26 13:11:12 crc 
kubenswrapper[4724]: I0226 13:11:12.401797 4724 scope.go:117] "RemoveContainer" containerID="51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21" Feb 26 13:11:12 crc kubenswrapper[4724]: E0226 13:11:12.402002 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21\": container with ID starting with 51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21 not found: ID does not exist" containerID="51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.402019 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21"} err="failed to get container status \"51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21\": rpc error: code = NotFound desc = could not find container \"51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21\": container with ID starting with 51c17ec3af60f45c4ceabf4ee1b764d28e6ab1454ac5f05e3cbb0df185420e21 not found: ID does not exist" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.402032 4724 scope.go:117] "RemoveContainer" containerID="5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c" Feb 26 13:11:12 crc kubenswrapper[4724]: E0226 13:11:12.402281 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c\": container with ID starting with 5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c not found: ID does not exist" containerID="5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c" Feb 26 13:11:12 crc kubenswrapper[4724]: I0226 13:11:12.402297 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c"} err="failed to get container status \"5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c\": rpc error: code = NotFound desc = could not find container \"5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c\": container with ID starting with 5a643517df9dc352051bddb86a9c48be89c0853074e402878354155b21e2709c not found: ID does not exist" Feb 26 13:11:13 crc kubenswrapper[4724]: I0226 13:11:13.987453 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" path="/var/lib/kubelet/pods/cc075666-0f32-4960-a6d0-ed53a6181b6c/volumes" Feb 26 13:11:16 crc kubenswrapper[4724]: I0226 13:11:16.906470 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:11:16 crc kubenswrapper[4724]: I0226 13:11:16.907008 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:11:16 crc kubenswrapper[4724]: I0226 13:11:16.907058 4724 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:11:16 crc kubenswrapper[4724]: I0226 13:11:16.907805 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:11:16 crc kubenswrapper[4724]: I0226 13:11:16.907854 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" gracePeriod=600 Feb 26 13:11:17 crc kubenswrapper[4724]: E0226 13:11:17.027286 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:11:17 crc kubenswrapper[4724]: I0226 13:11:17.320399 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" exitCode=0 Feb 26 13:11:17 crc kubenswrapper[4724]: I0226 13:11:17.320447 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402"} Feb 26 13:11:17 crc kubenswrapper[4724]: I0226 13:11:17.320490 4724 scope.go:117] "RemoveContainer" containerID="d2dd2414db08851e199c7911682ef4fabc2f32d8ee6a812766e0ebcf2d193500" Feb 26 13:11:17 crc kubenswrapper[4724]: I0226 13:11:17.321255 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:11:17 crc kubenswrapper[4724]: E0226 13:11:17.321520 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:11:28 crc kubenswrapper[4724]: I0226 13:11:28.975450 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:11:28 crc kubenswrapper[4724]: E0226 13:11:28.976207 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:11:43 
crc kubenswrapper[4724]: I0226 13:11:43.990225 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:11:43 crc kubenswrapper[4724]: E0226 13:11:43.991160 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:11:55 crc kubenswrapper[4724]: I0226 13:11:55.977348 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:11:55 crc kubenswrapper[4724]: E0226 13:11:55.978121 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.159315 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535192-wnwb2"] Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.165968 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="extract-utilities" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166016 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="extract-utilities" Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.166049 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="extract-content" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166055 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="extract-content" Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.166068 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="extract-utilities" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166073 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="extract-utilities" Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.166096 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166102 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.166112 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="extract-content" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166117 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="extract-content" Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.166148 
4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166155 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" Feb 26 13:12:00 crc kubenswrapper[4724]: E0226 13:12:00.166169 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce267166-8cda-4753-b3a2-9bd506685c29" containerName="oc" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166190 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce267166-8cda-4753-b3a2-9bd506685c29" containerName="oc" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166947 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e0d05c-00d7-43e2-87c4-7c908b1c9b77" containerName="registry-server" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.166980 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc075666-0f32-4960-a6d0-ed53a6181b6c" containerName="registry-server" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.167008 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce267166-8cda-4753-b3a2-9bd506685c29" containerName="oc" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.167771 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.170036 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.170888 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.174450 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.176702 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535192-wnwb2"] Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.279007 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5wg\" (UniqueName: \"kubernetes.io/projected/8256e3a2-215d-4cea-8641-15be415f6180-kube-api-access-kt5wg\") pod \"auto-csr-approver-29535192-wnwb2\" (UID: \"8256e3a2-215d-4cea-8641-15be415f6180\") " pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.381637 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5wg\" (UniqueName: \"kubernetes.io/projected/8256e3a2-215d-4cea-8641-15be415f6180-kube-api-access-kt5wg\") pod \"auto-csr-approver-29535192-wnwb2\" (UID: \"8256e3a2-215d-4cea-8641-15be415f6180\") " pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.405070 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5wg\" (UniqueName: \"kubernetes.io/projected/8256e3a2-215d-4cea-8641-15be415f6180-kube-api-access-kt5wg\") pod \"auto-csr-approver-29535192-wnwb2\" (UID: \"8256e3a2-215d-4cea-8641-15be415f6180\") " pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.512087 4724 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:00 crc kubenswrapper[4724]: I0226 13:12:00.976729 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535192-wnwb2"] Feb 26 13:12:01 crc kubenswrapper[4724]: I0226 13:12:01.787362 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" event={"ID":"8256e3a2-215d-4cea-8641-15be415f6180","Type":"ContainerStarted","Data":"26a8d05ed0c90531ed4944307ca33301e2513ce405c2835af024e844a7f44a86"} Feb 26 13:12:02 crc kubenswrapper[4724]: I0226 13:12:02.804264 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" event={"ID":"8256e3a2-215d-4cea-8641-15be415f6180","Type":"ContainerStarted","Data":"4f3c1cc48e5cd01a3d5d65fffc6fcbbc8ef500d729e49e0736acb5b1a145b3b5"} Feb 26 13:12:02 crc kubenswrapper[4724]: I0226 13:12:02.829039 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" podStartSLOduration=1.798255613 podStartE2EDuration="2.829016319s" podCreationTimestamp="2026-02-26 13:12:00 +0000 UTC" firstStartedPulling="2026-02-26 13:12:00.981568737 +0000 UTC m=+7587.637307852" lastFinishedPulling="2026-02-26 13:12:02.012329433 +0000 UTC m=+7588.668068558" observedRunningTime="2026-02-26 13:12:02.826577936 +0000 UTC m=+7589.482317051" watchObservedRunningTime="2026-02-26 13:12:02.829016319 +0000 UTC m=+7589.484755454" Feb 26 13:12:03 crc kubenswrapper[4724]: I0226 13:12:03.818046 4724 generic.go:334] "Generic (PLEG): container finished" podID="8256e3a2-215d-4cea-8641-15be415f6180" containerID="4f3c1cc48e5cd01a3d5d65fffc6fcbbc8ef500d729e49e0736acb5b1a145b3b5" exitCode=0 Feb 26 13:12:03 crc kubenswrapper[4724]: I0226 13:12:03.818250 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" event={"ID":"8256e3a2-215d-4cea-8641-15be415f6180","Type":"ContainerDied","Data":"4f3c1cc48e5cd01a3d5d65fffc6fcbbc8ef500d729e49e0736acb5b1a145b3b5"} Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.245667 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.283068 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5wg\" (UniqueName: \"kubernetes.io/projected/8256e3a2-215d-4cea-8641-15be415f6180-kube-api-access-kt5wg\") pod \"8256e3a2-215d-4cea-8641-15be415f6180\" (UID: \"8256e3a2-215d-4cea-8641-15be415f6180\") " Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.290452 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8256e3a2-215d-4cea-8641-15be415f6180-kube-api-access-kt5wg" (OuterVolumeSpecName: "kube-api-access-kt5wg") pod "8256e3a2-215d-4cea-8641-15be415f6180" (UID: "8256e3a2-215d-4cea-8641-15be415f6180"). InnerVolumeSpecName "kube-api-access-kt5wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.385325 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5wg\" (UniqueName: \"kubernetes.io/projected/8256e3a2-215d-4cea-8641-15be415f6180-kube-api-access-kt5wg\") on node \"crc\" DevicePath \"\"" Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.846245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" event={"ID":"8256e3a2-215d-4cea-8641-15be415f6180","Type":"ContainerDied","Data":"26a8d05ed0c90531ed4944307ca33301e2513ce405c2835af024e844a7f44a86"} Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.846315 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26a8d05ed0c90531ed4944307ca33301e2513ce405c2835af024e844a7f44a86" Feb 26 13:12:05 crc kubenswrapper[4724]: I0226 13:12:05.846405 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535192-wnwb2" Feb 26 13:12:06 crc kubenswrapper[4724]: I0226 13:12:06.324863 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535186-bmg7h"] Feb 26 13:12:06 crc kubenswrapper[4724]: I0226 13:12:06.333035 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535186-bmg7h"] Feb 26 13:12:07 crc kubenswrapper[4724]: I0226 13:12:07.991963 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb" path="/var/lib/kubelet/pods/6c65cdfb-7c89-4b21-9a32-4f921f3ebbdb/volumes" Feb 26 13:12:09 crc kubenswrapper[4724]: I0226 13:12:09.976266 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:12:09 crc kubenswrapper[4724]: E0226 13:12:09.976873 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:12:20 crc kubenswrapper[4724]: I0226 13:12:20.976674 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:12:20 crc kubenswrapper[4724]: E0226 13:12:20.977364 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:12:25 crc kubenswrapper[4724]: I0226 13:12:25.607135 4724 scope.go:117] "RemoveContainer" containerID="b9dd7baf0b0df85e53869d54f30c5074f7b95d6f396bfebda564fafb6af176b2" Feb 26 13:12:31 crc kubenswrapper[4724]: I0226 13:12:31.979777 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:12:31 crc kubenswrapper[4724]: E0226 13:12:31.980570 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:12:44 crc kubenswrapper[4724]: I0226 13:12:44.976128 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:12:44 crc kubenswrapper[4724]: E0226 13:12:44.976911 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:12:57 crc kubenswrapper[4724]: I0226 13:12:57.975681 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:12:57 crc kubenswrapper[4724]: E0226 13:12:57.976888 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:13:09 crc kubenswrapper[4724]: I0226 13:13:09.976415 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:13:09 crc kubenswrapper[4724]: E0226 13:13:09.977684 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:13:22 crc kubenswrapper[4724]: I0226 13:13:22.975953 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:13:22 crc kubenswrapper[4724]: E0226 13:13:22.976920 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:13:36 crc kubenswrapper[4724]: I0226 13:13:36.975575 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:13:36 crc kubenswrapper[4724]: E0226 13:13:36.976255 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:13:49 crc kubenswrapper[4724]: I0226 13:13:49.975973 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:13:49 crc kubenswrapper[4724]: E0226 13:13:49.976805 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.156848 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535194-2q7dp"] Feb 26 13:14:00 crc kubenswrapper[4724]: E0226 13:14:00.157849 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8256e3a2-215d-4cea-8641-15be415f6180" containerName="oc" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.157866 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8256e3a2-215d-4cea-8641-15be415f6180" containerName="oc" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.158094 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8256e3a2-215d-4cea-8641-15be415f6180" containerName="oc" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.158818 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.164695 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.168685 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.169280 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.169908 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535194-2q7dp"] Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.276846 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9hh\" (UniqueName: \"kubernetes.io/projected/4b26798b-9ef9-4b67-9326-a987e89231ed-kube-api-access-8q9hh\") pod \"auto-csr-approver-29535194-2q7dp\" (UID: \"4b26798b-9ef9-4b67-9326-a987e89231ed\") " pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.379392 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q9hh\" (UniqueName: \"kubernetes.io/projected/4b26798b-9ef9-4b67-9326-a987e89231ed-kube-api-access-8q9hh\") pod \"auto-csr-approver-29535194-2q7dp\" (UID: \"4b26798b-9ef9-4b67-9326-a987e89231ed\") " pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.399301 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q9hh\" (UniqueName: 
\"kubernetes.io/projected/4b26798b-9ef9-4b67-9326-a987e89231ed-kube-api-access-8q9hh\") pod \"auto-csr-approver-29535194-2q7dp\" (UID: \"4b26798b-9ef9-4b67-9326-a987e89231ed\") " pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.482049 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:00 crc kubenswrapper[4724]: I0226 13:14:00.981692 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535194-2q7dp"] Feb 26 13:14:01 crc kubenswrapper[4724]: I0226 13:14:01.888739 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" event={"ID":"4b26798b-9ef9-4b67-9326-a987e89231ed","Type":"ContainerStarted","Data":"641a2c80bc3d80318c6ed12f557e4ffbd2e0a3537c1dbd404fc56eb4b8ddf2ca"} Feb 26 13:14:02 crc kubenswrapper[4724]: I0226 13:14:02.898923 4724 generic.go:334] "Generic (PLEG): container finished" podID="4b26798b-9ef9-4b67-9326-a987e89231ed" containerID="9ecb622f6bf3bbb40885190faa27ecd9794c81753cba76ff1c3b2683e1d810ec" exitCode=0 Feb 26 13:14:02 crc kubenswrapper[4724]: I0226 13:14:02.898980 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" event={"ID":"4b26798b-9ef9-4b67-9326-a987e89231ed","Type":"ContainerDied","Data":"9ecb622f6bf3bbb40885190faa27ecd9794c81753cba76ff1c3b2683e1d810ec"} Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.272558 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.381334 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q9hh\" (UniqueName: \"kubernetes.io/projected/4b26798b-9ef9-4b67-9326-a987e89231ed-kube-api-access-8q9hh\") pod \"4b26798b-9ef9-4b67-9326-a987e89231ed\" (UID: \"4b26798b-9ef9-4b67-9326-a987e89231ed\") " Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.387974 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b26798b-9ef9-4b67-9326-a987e89231ed-kube-api-access-8q9hh" (OuterVolumeSpecName: "kube-api-access-8q9hh") pod "4b26798b-9ef9-4b67-9326-a987e89231ed" (UID: "4b26798b-9ef9-4b67-9326-a987e89231ed"). InnerVolumeSpecName "kube-api-access-8q9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.483984 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q9hh\" (UniqueName: \"kubernetes.io/projected/4b26798b-9ef9-4b67-9326-a987e89231ed-kube-api-access-8q9hh\") on node \"crc\" DevicePath \"\"" Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.934568 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" event={"ID":"4b26798b-9ef9-4b67-9326-a987e89231ed","Type":"ContainerDied","Data":"641a2c80bc3d80318c6ed12f557e4ffbd2e0a3537c1dbd404fc56eb4b8ddf2ca"} Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.934975 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641a2c80bc3d80318c6ed12f557e4ffbd2e0a3537c1dbd404fc56eb4b8ddf2ca" Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.934627 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535194-2q7dp" Feb 26 13:14:04 crc kubenswrapper[4724]: I0226 13:14:04.977597 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:14:04 crc kubenswrapper[4724]: E0226 13:14:04.978115 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:14:05 crc kubenswrapper[4724]: E0226 13:14:05.037537 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b26798b_9ef9_4b67_9326_a987e89231ed.slice/crio-641a2c80bc3d80318c6ed12f557e4ffbd2e0a3537c1dbd404fc56eb4b8ddf2ca\": RecentStats: unable to find data in memory cache]" Feb 26 13:14:05 crc kubenswrapper[4724]: I0226 13:14:05.390621 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535188-mdktf"] Feb 26 13:14:05 crc kubenswrapper[4724]: I0226 13:14:05.400033 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535188-mdktf"] Feb 26 13:14:05 crc kubenswrapper[4724]: I0226 13:14:05.988706 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb7d8bc-dd78-419d-9b96-fb961c1ddde1" path="/var/lib/kubelet/pods/fdb7d8bc-dd78-419d-9b96-fb961c1ddde1/volumes" Feb 26 13:14:19 crc kubenswrapper[4724]: I0226 13:14:19.976037 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:14:19 crc kubenswrapper[4724]: E0226 13:14:19.976923 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:14:25 crc kubenswrapper[4724]: I0226 13:14:25.704919 4724 scope.go:117] "RemoveContainer" containerID="e2552d218fa23f497223ca478c416e9a36f349fe30384c9ce0a59ae19e27a36d" Feb 26 13:14:30 crc kubenswrapper[4724]: I0226 13:14:30.975993 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:14:30 crc kubenswrapper[4724]: E0226 13:14:30.976683 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:14:41 crc kubenswrapper[4724]: I0226 13:14:41.976659 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:14:41 crc kubenswrapper[4724]: E0226 13:14:41.977483 4724 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:14:53 crc kubenswrapper[4724]: I0226 13:14:53.982544 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:14:53 crc kubenswrapper[4724]: E0226 13:14:53.983467 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.221492 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml"] Feb 26 13:15:00 crc kubenswrapper[4724]: E0226 13:15:00.222675 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b26798b-9ef9-4b67-9326-a987e89231ed" containerName="oc" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.222694 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b26798b-9ef9-4b67-9326-a987e89231ed" containerName="oc" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.223037 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b26798b-9ef9-4b67-9326-a987e89231ed" containerName="oc" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.226010 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.229562 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.231088 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.238819 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml"] Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.361809 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdh7n\" (UniqueName: \"kubernetes.io/projected/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-kube-api-access-zdh7n\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.362021 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-secret-volume\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.362076 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-config-volume\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.463502 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-secret-volume\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.463541 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-config-volume\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.463681 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdh7n\" (UniqueName: \"kubernetes.io/projected/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-kube-api-access-zdh7n\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.464805 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-config-volume\") pod 
\"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.478694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-secret-volume\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.481411 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdh7n\" (UniqueName: \"kubernetes.io/projected/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-kube-api-access-zdh7n\") pod \"collect-profiles-29535195-gx2ml\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:00 crc kubenswrapper[4724]: I0226 13:15:00.552298 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:01 crc kubenswrapper[4724]: I0226 13:15:01.029645 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml"] Feb 26 13:15:01 crc kubenswrapper[4724]: I0226 13:15:01.503363 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" event={"ID":"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311","Type":"ContainerStarted","Data":"fbae59b9af9f2e90e9db4908ecb38e3327f336b7957d0f8c51bb458de3f2b805"} Feb 26 13:15:01 crc kubenswrapper[4724]: I0226 13:15:01.503410 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" event={"ID":"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311","Type":"ContainerStarted","Data":"ccbdf0fd0b63a29ed86768ab94aad902cfdae47233fc6ca3facf4e1ffff1ece6"} Feb 26 13:15:01 crc kubenswrapper[4724]: I0226 13:15:01.539944 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" podStartSLOduration=1.5398983309999998 podStartE2EDuration="1.539898331s" podCreationTimestamp="2026-02-26 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:15:01.533794885 +0000 UTC m=+7768.189534030" watchObservedRunningTime="2026-02-26 13:15:01.539898331 +0000 UTC m=+7768.195637446" Feb 26 13:15:02 crc kubenswrapper[4724]: I0226 13:15:02.513221 4724 generic.go:334] "Generic (PLEG): container finished" podID="9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" containerID="fbae59b9af9f2e90e9db4908ecb38e3327f336b7957d0f8c51bb458de3f2b805" exitCode=0 Feb 26 13:15:02 crc kubenswrapper[4724]: I0226 13:15:02.513295 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" event={"ID":"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311","Type":"ContainerDied","Data":"fbae59b9af9f2e90e9db4908ecb38e3327f336b7957d0f8c51bb458de3f2b805"} Feb 26 13:15:03 crc kubenswrapper[4724]: I0226 13:15:03.939836 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.031432 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-config-volume\") pod \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.031863 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-secret-volume\") pod \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.031919 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdh7n\" (UniqueName: \"kubernetes.io/projected/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-kube-api-access-zdh7n\") pod \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\" (UID: \"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311\") " Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.032332 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" (UID: "9e4bb34d-f6bb-438a-9fd9-6a7d1155d311"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.033041 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.038070 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" (UID: "9e4bb34d-f6bb-438a-9fd9-6a7d1155d311"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.039031 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-kube-api-access-zdh7n" (OuterVolumeSpecName: "kube-api-access-zdh7n") pod "9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" (UID: "9e4bb34d-f6bb-438a-9fd9-6a7d1155d311"). InnerVolumeSpecName "kube-api-access-zdh7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.134513 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.134547 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdh7n\" (UniqueName: \"kubernetes.io/projected/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311-kube-api-access-zdh7n\") on node \"crc\" DevicePath \"\"" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.533745 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" event={"ID":"9e4bb34d-f6bb-438a-9fd9-6a7d1155d311","Type":"ContainerDied","Data":"ccbdf0fd0b63a29ed86768ab94aad902cfdae47233fc6ca3facf4e1ffff1ece6"} Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.533786 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbdf0fd0b63a29ed86768ab94aad902cfdae47233fc6ca3facf4e1ffff1ece6" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.533829 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml" Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.616898 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw"] Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.627785 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535150-xn5fw"] Feb 26 13:15:04 crc kubenswrapper[4724]: I0226 13:15:04.976085 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:15:04 crc kubenswrapper[4724]: E0226 13:15:04.976395 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:15:05 crc kubenswrapper[4724]: I0226 13:15:05.986309 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e091f5-f546-4bf5-b14d-9ae47e7c3f96" path="/var/lib/kubelet/pods/55e091f5-f546-4bf5-b14d-9ae47e7c3f96/volumes" Feb 26 13:15:16 crc kubenswrapper[4724]: I0226 13:15:16.975346 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:15:16 crc kubenswrapper[4724]: E0226 13:15:16.976252 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:15:25 crc kubenswrapper[4724]: I0226 13:15:25.780944 4724 scope.go:117] "RemoveContainer" containerID="74e8e6227a58736f6af55cf66b3c62b9f78605007c3d7e3b2e6e3e2175c84e79" Feb 26 
13:15:31 crc kubenswrapper[4724]: I0226 13:15:31.976024 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:15:31 crc kubenswrapper[4724]: E0226 13:15:31.976711 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:15:42 crc kubenswrapper[4724]: I0226 13:15:42.975358 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:15:42 crc kubenswrapper[4724]: E0226 13:15:42.976155 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:15:55 crc kubenswrapper[4724]: I0226 13:15:55.976592 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:15:55 crc kubenswrapper[4724]: E0226 13:15:55.977971 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.210833 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535196-9ws75"] Feb 26 13:16:00 crc kubenswrapper[4724]: E0226 13:16:00.213519 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" containerName="collect-profiles" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.213651 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" containerName="collect-profiles" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.214067 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" containerName="collect-profiles" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.215283 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.219724 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.219934 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.220093 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.226411 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535196-9ws75"] Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.266986 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8xq\" (UniqueName: \"kubernetes.io/projected/f45e6f70-4f96-4935-9f53-b00971cbe271-kube-api-access-lm8xq\") pod \"auto-csr-approver-29535196-9ws75\" (UID: \"f45e6f70-4f96-4935-9f53-b00971cbe271\") " pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.368498 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8xq\" (UniqueName: \"kubernetes.io/projected/f45e6f70-4f96-4935-9f53-b00971cbe271-kube-api-access-lm8xq\") pod \"auto-csr-approver-29535196-9ws75\" (UID: \"f45e6f70-4f96-4935-9f53-b00971cbe271\") " pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.395306 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8xq\" (UniqueName: \"kubernetes.io/projected/f45e6f70-4f96-4935-9f53-b00971cbe271-kube-api-access-lm8xq\") pod \"auto-csr-approver-29535196-9ws75\" (UID: \"f45e6f70-4f96-4935-9f53-b00971cbe271\") " pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:00 crc kubenswrapper[4724]: I0226 13:16:00.549821 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:01 crc kubenswrapper[4724]: I0226 13:16:01.065470 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535196-9ws75"] Feb 26 13:16:01 crc kubenswrapper[4724]: W0226 13:16:01.084849 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45e6f70_4f96_4935_9f53_b00971cbe271.slice/crio-4e68833ed17ade7d053812a618860b798e2fb9192c23ef3d1081c9a9ddee6a01 WatchSource:0}: Error finding container 4e68833ed17ade7d053812a618860b798e2fb9192c23ef3d1081c9a9ddee6a01: Status 404 returned error can't find the container with id 4e68833ed17ade7d053812a618860b798e2fb9192c23ef3d1081c9a9ddee6a01 Feb 26 13:16:01 crc kubenswrapper[4724]: I0226 13:16:01.086770 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:16:02 crc kubenswrapper[4724]: I0226 13:16:02.059744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535196-9ws75" event={"ID":"f45e6f70-4f96-4935-9f53-b00971cbe271","Type":"ContainerStarted","Data":"4e68833ed17ade7d053812a618860b798e2fb9192c23ef3d1081c9a9ddee6a01"} Feb 26 13:16:04 crc kubenswrapper[4724]: I0226 13:16:04.082271 4724 generic.go:334] "Generic (PLEG): container finished" podID="f45e6f70-4f96-4935-9f53-b00971cbe271" containerID="233e34c9a8bf0c29be97d97f14f503e507d538ba701245bcc7df3ddddf2768a2" exitCode=0 Feb 26 13:16:04 crc kubenswrapper[4724]: I0226 13:16:04.082524 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535196-9ws75" event={"ID":"f45e6f70-4f96-4935-9f53-b00971cbe271","Type":"ContainerDied","Data":"233e34c9a8bf0c29be97d97f14f503e507d538ba701245bcc7df3ddddf2768a2"} Feb 26 13:16:05 crc kubenswrapper[4724]: I0226 13:16:05.436313 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:05 crc kubenswrapper[4724]: I0226 13:16:05.599581 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8xq\" (UniqueName: \"kubernetes.io/projected/f45e6f70-4f96-4935-9f53-b00971cbe271-kube-api-access-lm8xq\") pod \"f45e6f70-4f96-4935-9f53-b00971cbe271\" (UID: \"f45e6f70-4f96-4935-9f53-b00971cbe271\") " Feb 26 13:16:05 crc kubenswrapper[4724]: I0226 13:16:05.606607 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45e6f70-4f96-4935-9f53-b00971cbe271-kube-api-access-lm8xq" (OuterVolumeSpecName: "kube-api-access-lm8xq") pod "f45e6f70-4f96-4935-9f53-b00971cbe271" (UID: "f45e6f70-4f96-4935-9f53-b00971cbe271"). InnerVolumeSpecName "kube-api-access-lm8xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:16:05 crc kubenswrapper[4724]: I0226 13:16:05.702945 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm8xq\" (UniqueName: \"kubernetes.io/projected/f45e6f70-4f96-4935-9f53-b00971cbe271-kube-api-access-lm8xq\") on node \"crc\" DevicePath \"\"" Feb 26 13:16:06 crc kubenswrapper[4724]: I0226 13:16:06.103246 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535196-9ws75" event={"ID":"f45e6f70-4f96-4935-9f53-b00971cbe271","Type":"ContainerDied","Data":"4e68833ed17ade7d053812a618860b798e2fb9192c23ef3d1081c9a9ddee6a01"} Feb 26 13:16:06 crc kubenswrapper[4724]: I0226 13:16:06.103285 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e68833ed17ade7d053812a618860b798e2fb9192c23ef3d1081c9a9ddee6a01" Feb 26 13:16:06 crc kubenswrapper[4724]: I0226 13:16:06.103323 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535196-9ws75" Feb 26 13:16:06 crc kubenswrapper[4724]: I0226 13:16:06.523798 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535190-w5ln7"] Feb 26 13:16:06 crc kubenswrapper[4724]: I0226 13:16:06.533392 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535190-w5ln7"] Feb 26 13:16:07 crc kubenswrapper[4724]: I0226 13:16:07.998958 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce267166-8cda-4753-b3a2-9bd506685c29" path="/var/lib/kubelet/pods/ce267166-8cda-4753-b3a2-9bd506685c29/volumes" Feb 26 13:16:09 crc kubenswrapper[4724]: I0226 13:16:09.975849 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:16:09 crc kubenswrapper[4724]: E0226 13:16:09.976458 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:16:21 crc kubenswrapper[4724]: I0226 13:16:21.975535 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:16:23 crc kubenswrapper[4724]: I0226 13:16:23.271322 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"28ba1bdecaca0324305e932c22e76ce7343db7b56c19c15304770da6d24d656d"} Feb 26 13:16:25 crc kubenswrapper[4724]: I0226 13:16:25.837325 4724 scope.go:117] "RemoveContainer" containerID="a6f075b5bd630272dcc939a6ff9850acc16b04ba30e2648022ee78e0d9656ebd" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.546231 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7x8zw"] Feb 26 13:16:59 crc kubenswrapper[4724]: E0226 13:16:59.547270 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45e6f70-4f96-4935-9f53-b00971cbe271" containerName="oc" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.547286 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f45e6f70-4f96-4935-9f53-b00971cbe271" containerName="oc" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.547490 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45e6f70-4f96-4935-9f53-b00971cbe271" containerName="oc" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.573568 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7x8zw"] Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.573729 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.723992 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-utilities\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.724427 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-catalog-content\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.724831 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b289t\" (UniqueName: \"kubernetes.io/projected/c956e3b4-00b3-4d8c-913b-5421d30445de-kube-api-access-b289t\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.826240 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-catalog-content\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.826477 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b289t\" (UniqueName: \"kubernetes.io/projected/c956e3b4-00b3-4d8c-913b-5421d30445de-kube-api-access-b289t\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.826558 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-utilities\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.826761 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-catalog-content\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.827025 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-utilities\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.851848 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b289t\" (UniqueName: \"kubernetes.io/projected/c956e3b4-00b3-4d8c-913b-5421d30445de-kube-api-access-b289t\") pod \"certified-operators-7x8zw\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:16:59 crc kubenswrapper[4724]: I0226 13:16:59.900321 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:00 crc kubenswrapper[4724]: I0226 13:17:00.349272 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7x8zw"] Feb 26 13:17:00 crc kubenswrapper[4724]: I0226 13:17:00.636917 4724 generic.go:334] "Generic (PLEG): container finished" podID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerID="7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3" exitCode=0 Feb 26 13:17:00 crc kubenswrapper[4724]: I0226 13:17:00.637077 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerDied","Data":"7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3"} Feb 26 13:17:00 crc kubenswrapper[4724]: I0226 13:17:00.637251 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerStarted","Data":"8e8b64e7f3ab9f6668ce9569da79e8e9892cb8d2db10d56d3f2acd41eb9570fe"} Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.660539 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerStarted","Data":"5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89"} Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.732210 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8m5l"] Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.739400 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.747961 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8m5l"] Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.792594 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-utilities\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.793106 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwlh\" (UniqueName: \"kubernetes.io/projected/118907e7-3d21-444c-9522-148164d5aa06-kube-api-access-nnwlh\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.793340 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-catalog-content\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.894694 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwlh\" (UniqueName: \"kubernetes.io/projected/118907e7-3d21-444c-9522-148164d5aa06-kube-api-access-nnwlh\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.895098 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-catalog-content\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.895546 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-catalog-content\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.895693 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-utilities\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.895975 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-utilities\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:02 crc kubenswrapper[4724]: I0226 13:17:02.921659 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nnwlh\" (UniqueName: \"kubernetes.io/projected/118907e7-3d21-444c-9522-148164d5aa06-kube-api-access-nnwlh\") pod \"redhat-marketplace-q8m5l\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:03 crc kubenswrapper[4724]: I0226 13:17:03.058082 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:03 crc kubenswrapper[4724]: I0226 13:17:03.704923 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8m5l"] Feb 26 13:17:04 crc kubenswrapper[4724]: I0226 13:17:04.678296 4724 generic.go:334] "Generic (PLEG): container finished" podID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerID="5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89" exitCode=0 Feb 26 13:17:04 crc kubenswrapper[4724]: I0226 13:17:04.678398 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerDied","Data":"5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89"} Feb 26 13:17:04 crc kubenswrapper[4724]: I0226 13:17:04.680358 4724 generic.go:334] "Generic (PLEG): container finished" podID="118907e7-3d21-444c-9522-148164d5aa06" containerID="5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9" exitCode=0 Feb 26 13:17:04 crc kubenswrapper[4724]: I0226 13:17:04.680436 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerDied","Data":"5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9"} Feb 26 13:17:04 crc kubenswrapper[4724]: I0226 13:17:04.680465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerStarted","Data":"ec8074192e17af09cf2296825549e478dae3fb7037adb8e3bbbe7adcc87ba0b3"} Feb 26 13:17:05 crc kubenswrapper[4724]: I0226 13:17:05.691995 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerStarted","Data":"bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a"} Feb 26 13:17:05 crc kubenswrapper[4724]: I0226 13:17:05.713095 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7x8zw" podStartSLOduration=2.216218686 podStartE2EDuration="6.713016796s" podCreationTimestamp="2026-02-26 13:16:59 +0000 UTC" firstStartedPulling="2026-02-26 13:17:00.640128305 +0000 UTC m=+7887.295867420" lastFinishedPulling="2026-02-26 13:17:05.136926415 +0000 UTC m=+7891.792665530" observedRunningTime="2026-02-26 13:17:05.707861114 +0000 UTC m=+7892.363600259" watchObservedRunningTime="2026-02-26 13:17:05.713016796 +0000 UTC m=+7892.368755911" Feb 26 13:17:06 crc kubenswrapper[4724]: I0226 13:17:06.701835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerStarted","Data":"e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be"} Feb 26 13:17:08 crc kubenswrapper[4724]: I0226 13:17:08.727098 4724 generic.go:334] "Generic (PLEG): container finished" 
podID="118907e7-3d21-444c-9522-148164d5aa06" containerID="e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be" exitCode=0 Feb 26 13:17:08 crc kubenswrapper[4724]: I0226 13:17:08.728621 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerDied","Data":"e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be"} Feb 26 13:17:09 crc kubenswrapper[4724]: I0226 13:17:09.738052 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerStarted","Data":"3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12"} Feb 26 13:17:09 crc kubenswrapper[4724]: I0226 13:17:09.763032 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8m5l" podStartSLOduration=3.271596579 podStartE2EDuration="7.763012832s" podCreationTimestamp="2026-02-26 13:17:02 +0000 UTC" firstStartedPulling="2026-02-26 13:17:04.68222444 +0000 UTC m=+7891.337963545" lastFinishedPulling="2026-02-26 13:17:09.173640683 +0000 UTC m=+7895.829379798" observedRunningTime="2026-02-26 13:17:09.760369805 +0000 UTC m=+7896.416108920" watchObservedRunningTime="2026-02-26 13:17:09.763012832 +0000 UTC m=+7896.418751957" Feb 26 13:17:09 crc kubenswrapper[4724]: I0226 13:17:09.900938 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:09 crc kubenswrapper[4724]: I0226 13:17:09.900994 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:10 crc kubenswrapper[4724]: I0226 13:17:10.953002 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7x8zw" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" probeResult="failure" output=< Feb 26 13:17:10 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:17:10 crc kubenswrapper[4724]: > Feb 26 13:17:13 crc kubenswrapper[4724]: I0226 13:17:13.058866 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:13 crc kubenswrapper[4724]: I0226 13:17:13.060505 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:14 crc kubenswrapper[4724]: I0226 13:17:14.119647 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-q8m5l" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="registry-server" probeResult="failure" output=< Feb 26 13:17:14 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:17:14 crc kubenswrapper[4724]: > Feb 26 13:17:21 crc kubenswrapper[4724]: I0226 13:17:21.044624 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7x8zw" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" probeResult="failure" output=< Feb 26 13:17:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:17:21 crc kubenswrapper[4724]: > Feb 26 13:17:24 crc kubenswrapper[4724]: I0226 13:17:24.113171 4724 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-q8m5l" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="registry-server" probeResult="failure" output=< Feb 26 13:17:24 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:17:24 crc kubenswrapper[4724]: > Feb 26 13:17:30 crc kubenswrapper[4724]: I0226 13:17:30.954683 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7x8zw" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" probeResult="failure" output=< Feb 26 13:17:30 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:17:30 crc kubenswrapper[4724]: > Feb 26 13:17:33 crc kubenswrapper[4724]: I0226 13:17:33.121347 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:33 crc kubenswrapper[4724]: I0226 13:17:33.181009 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:33 crc kubenswrapper[4724]: I0226 13:17:33.941040 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8m5l"] Feb 26 13:17:35 crc kubenswrapper[4724]: I0226 13:17:35.037361 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8m5l" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="registry-server" containerID="cri-o://3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12" gracePeriod=2 Feb 26 13:17:35 crc kubenswrapper[4724]: I0226 13:17:35.883712 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.048344 4724 generic.go:334] "Generic (PLEG): container finished" podID="118907e7-3d21-444c-9522-148164d5aa06" containerID="3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12" exitCode=0 Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.048389 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerDied","Data":"3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12"} Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.048416 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8m5l" event={"ID":"118907e7-3d21-444c-9522-148164d5aa06","Type":"ContainerDied","Data":"ec8074192e17af09cf2296825549e478dae3fb7037adb8e3bbbe7adcc87ba0b3"} Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.048436 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8m5l" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.048446 4724 scope.go:117] "RemoveContainer" containerID="3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.053103 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnwlh\" (UniqueName: \"kubernetes.io/projected/118907e7-3d21-444c-9522-148164d5aa06-kube-api-access-nnwlh\") pod \"118907e7-3d21-444c-9522-148164d5aa06\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.053393 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-utilities\") pod \"118907e7-3d21-444c-9522-148164d5aa06\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.053534 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-catalog-content\") pod \"118907e7-3d21-444c-9522-148164d5aa06\" (UID: \"118907e7-3d21-444c-9522-148164d5aa06\") " Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.054675 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-utilities" (OuterVolumeSpecName: "utilities") pod "118907e7-3d21-444c-9522-148164d5aa06" (UID: "118907e7-3d21-444c-9522-148164d5aa06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.055121 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.079712 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118907e7-3d21-444c-9522-148164d5aa06-kube-api-access-nnwlh" (OuterVolumeSpecName: "kube-api-access-nnwlh") pod "118907e7-3d21-444c-9522-148164d5aa06" (UID: "118907e7-3d21-444c-9522-148164d5aa06"). InnerVolumeSpecName "kube-api-access-nnwlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.095110 4724 scope.go:117] "RemoveContainer" containerID="e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.109136 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "118907e7-3d21-444c-9522-148164d5aa06" (UID: "118907e7-3d21-444c-9522-148164d5aa06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.139852 4724 scope.go:117] "RemoveContainer" containerID="5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.158002 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnwlh\" (UniqueName: \"kubernetes.io/projected/118907e7-3d21-444c-9522-148164d5aa06-kube-api-access-nnwlh\") on node \"crc\" DevicePath \"\"" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.158034 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118907e7-3d21-444c-9522-148164d5aa06-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.206510 4724 scope.go:117] "RemoveContainer" containerID="3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12" Feb 26 13:17:36 crc kubenswrapper[4724]: E0226 13:17:36.210158 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12\": container with ID starting with 3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12 not found: ID does not exist" containerID="3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.210421 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12"} err="failed to get container status \"3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12\": rpc error: code = NotFound desc = could not find container \"3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12\": container with ID starting with 3ccf899b1cc3ec4cfb19cdf3daeb69afe7ad1a17b9f0e81a93c2377ec3ad9c12 not found: ID does not exist" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.210456 4724 scope.go:117] "RemoveContainer" containerID="e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be" Feb 26 13:17:36 crc kubenswrapper[4724]: E0226 13:17:36.212811 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be\": container with ID starting with e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be not found: ID does not exist" containerID="e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.212844 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be"} err="failed to get container status \"e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be\": rpc error: code = NotFound desc = could not find container \"e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be\": container with ID starting with e0701c1359f0920f0942200e20b21a91f6709be17003d7e525dc4380472a37be not found: ID does not exist" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.212864 4724 scope.go:117] "RemoveContainer" containerID="5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9" Feb 26 13:17:36 crc kubenswrapper[4724]: E0226 13:17:36.216150 4724 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9\": container with ID starting with 5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9 not found: ID does not exist" containerID="5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.216214 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9"} err="failed to get container status \"5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9\": rpc error: code = NotFound desc = could not find container \"5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9\": container with ID starting with 5582d7925ecd4bcacd84c8cd387ce39bf6f2e257cb3824cdcf319a77bf51bbf9 not found: ID does not exist" Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.389066 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8m5l"] Feb 26 13:17:36 crc kubenswrapper[4724]: I0226 13:17:36.398762 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8m5l"] Feb 26 13:17:37 crc kubenswrapper[4724]: I0226 13:17:37.989376 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118907e7-3d21-444c-9522-148164d5aa06" path="/var/lib/kubelet/pods/118907e7-3d21-444c-9522-148164d5aa06/volumes" Feb 26 13:17:39 crc kubenswrapper[4724]: I0226 13:17:39.962717 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:40 crc kubenswrapper[4724]: I0226 13:17:40.017325 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:40 crc kubenswrapper[4724]: I0226 13:17:40.338222 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7x8zw"] Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.105074 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7x8zw" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" containerID="cri-o://bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a" gracePeriod=2 Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.583320 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.764105 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-catalog-content\") pod \"c956e3b4-00b3-4d8c-913b-5421d30445de\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.764261 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b289t\" (UniqueName: \"kubernetes.io/projected/c956e3b4-00b3-4d8c-913b-5421d30445de-kube-api-access-b289t\") pod \"c956e3b4-00b3-4d8c-913b-5421d30445de\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.764296 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-utilities\") pod \"c956e3b4-00b3-4d8c-913b-5421d30445de\" (UID: \"c956e3b4-00b3-4d8c-913b-5421d30445de\") " Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.765767 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-utilities" (OuterVolumeSpecName: "utilities") pod "c956e3b4-00b3-4d8c-913b-5421d30445de" (UID: "c956e3b4-00b3-4d8c-913b-5421d30445de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.773811 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c956e3b4-00b3-4d8c-913b-5421d30445de-kube-api-access-b289t" (OuterVolumeSpecName: "kube-api-access-b289t") pod "c956e3b4-00b3-4d8c-913b-5421d30445de" (UID: "c956e3b4-00b3-4d8c-913b-5421d30445de"). InnerVolumeSpecName "kube-api-access-b289t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.867450 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b289t\" (UniqueName: \"kubernetes.io/projected/c956e3b4-00b3-4d8c-913b-5421d30445de-kube-api-access-b289t\") on node \"crc\" DevicePath \"\"" Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.867483 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.898459 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c956e3b4-00b3-4d8c-913b-5421d30445de" (UID: "c956e3b4-00b3-4d8c-913b-5421d30445de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:17:41 crc kubenswrapper[4724]: I0226 13:17:41.969263 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c956e3b4-00b3-4d8c-913b-5421d30445de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.117222 4724 generic.go:334] "Generic (PLEG): container finished" podID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerID="bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a" exitCode=0 Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.117266 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerDied","Data":"bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a"} Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.117305 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x8zw" event={"ID":"c956e3b4-00b3-4d8c-913b-5421d30445de","Type":"ContainerDied","Data":"8e8b64e7f3ab9f6668ce9569da79e8e9892cb8d2db10d56d3f2acd41eb9570fe"} Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.117323 4724 scope.go:117] "RemoveContainer" containerID="bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.117338 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x8zw" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.147265 4724 scope.go:117] "RemoveContainer" containerID="5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.159215 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7x8zw"] Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.169102 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7x8zw"] Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.192674 4724 scope.go:117] "RemoveContainer" containerID="7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.240289 4724 scope.go:117] "RemoveContainer" containerID="bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a" Feb 26 13:17:42 crc kubenswrapper[4724]: E0226 13:17:42.240831 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a\": container with ID starting with bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a not found: ID does not exist" containerID="bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.240921 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a"} err="failed to get container status \"bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a\": rpc error: code = NotFound desc = could not find container \"bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a\": container with ID starting with bbf10aa323a05f083ab06f5a5760050da593dd2eb22814b71e6b9f40159cdb4a not found: ID does not exist" Feb 26 
13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.241029 4724 scope.go:117] "RemoveContainer" containerID="5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89" Feb 26 13:17:42 crc kubenswrapper[4724]: E0226 13:17:42.241674 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89\": container with ID starting with 5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89 not found: ID does not exist" containerID="5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.241722 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89"} err="failed to get container status \"5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89\": rpc error: code = NotFound desc = could not find container \"5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89\": container with ID starting with 5aba54cb380c9ca2766479d491d0dbe525553c84d5ed4e21807b227cb69a5e89 not found: ID does not exist" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.241738 4724 scope.go:117] "RemoveContainer" containerID="7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3" Feb 26 13:17:42 crc kubenswrapper[4724]: E0226 13:17:42.242240 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3\": container with ID starting with 7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3 not found: ID does not exist" containerID="7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3" Feb 26 13:17:42 crc kubenswrapper[4724]: I0226 13:17:42.242277 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3"} err="failed to get container status \"7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3\": rpc error: code = NotFound desc = could not find container \"7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3\": container with ID starting with 7f09162c6c390a482224db29100e2b1522a20881fefb6ab24507780d467fadb3 not found: ID does not exist" Feb 26 13:17:43 crc kubenswrapper[4724]: I0226 13:17:43.988117 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" path="/var/lib/kubelet/pods/c956e3b4-00b3-4d8c-913b-5421d30445de/volumes" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.169956 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535198-6xzdj"] Feb 26 13:18:00 crc kubenswrapper[4724]: E0226 13:18:00.170964 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="extract-utilities" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.170983 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="extract-utilities" Feb 26 13:18:00 crc kubenswrapper[4724]: E0226 13:18:00.170997 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="extract-content" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 
13:18:00.171006 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="extract-content" Feb 26 13:18:00 crc kubenswrapper[4724]: E0226 13:18:00.171021 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="extract-content" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.171028 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="extract-content" Feb 26 13:18:00 crc kubenswrapper[4724]: E0226 13:18:00.171051 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.171058 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" Feb 26 13:18:00 crc kubenswrapper[4724]: E0226 13:18:00.171072 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="registry-server" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.171079 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="registry-server" Feb 26 13:18:00 crc kubenswrapper[4724]: E0226 13:18:00.171095 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="extract-utilities" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.171102 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="extract-utilities" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.171372 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c956e3b4-00b3-4d8c-913b-5421d30445de" containerName="registry-server" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.171407 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="118907e7-3d21-444c-9522-148164d5aa06" containerName="registry-server" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.174264 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.179511 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.179513 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.180903 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.276114 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535198-6xzdj"] Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.311477 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4qn\" (UniqueName: \"kubernetes.io/projected/438abc8e-3494-423f-ad25-fb67642b25e4-kube-api-access-wr4qn\") pod \"auto-csr-approver-29535198-6xzdj\" (UID: \"438abc8e-3494-423f-ad25-fb67642b25e4\") " pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.413038 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4qn\" (UniqueName: \"kubernetes.io/projected/438abc8e-3494-423f-ad25-fb67642b25e4-kube-api-access-wr4qn\") pod \"auto-csr-approver-29535198-6xzdj\" (UID: \"438abc8e-3494-423f-ad25-fb67642b25e4\") " pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.468083 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4qn\" (UniqueName: \"kubernetes.io/projected/438abc8e-3494-423f-ad25-fb67642b25e4-kube-api-access-wr4qn\") pod \"auto-csr-approver-29535198-6xzdj\" (UID: \"438abc8e-3494-423f-ad25-fb67642b25e4\") " pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:00 crc kubenswrapper[4724]: I0226 13:18:00.499905 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:01 crc kubenswrapper[4724]: I0226 13:18:01.495614 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535198-6xzdj"] Feb 26 13:18:02 crc kubenswrapper[4724]: I0226 13:18:02.378872 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" event={"ID":"438abc8e-3494-423f-ad25-fb67642b25e4","Type":"ContainerStarted","Data":"988f0796bf97c0fdf10726170528018af6985fafce1919a327002f258a1dee40"} Feb 26 13:18:04 crc kubenswrapper[4724]: I0226 13:18:04.399132 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" event={"ID":"438abc8e-3494-423f-ad25-fb67642b25e4","Type":"ContainerStarted","Data":"3a2a473359947c74c0e4a44c9caab3621e6496e8d660d140919cc5d2f4e2ffd6"} Feb 26 13:18:04 crc kubenswrapper[4724]: I0226 13:18:04.422824 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" podStartSLOduration=3.325856155 podStartE2EDuration="4.422804448s" podCreationTimestamp="2026-02-26 13:18:00 +0000 UTC" firstStartedPulling="2026-02-26 13:18:01.512044942 +0000 UTC m=+7948.167784057" lastFinishedPulling="2026-02-26 13:18:02.608993235 +0000 UTC m=+7949.264732350" observedRunningTime="2026-02-26 13:18:04.413661325 +0000 UTC m=+7951.069400440" watchObservedRunningTime="2026-02-26 13:18:04.422804448 +0000 UTC m=+7951.078543553" Feb 26 13:18:08 crc kubenswrapper[4724]: I0226 13:18:08.459924 4724 generic.go:334] "Generic (PLEG): container finished" podID="438abc8e-3494-423f-ad25-fb67642b25e4" containerID="3a2a473359947c74c0e4a44c9caab3621e6496e8d660d140919cc5d2f4e2ffd6" exitCode=0 Feb 26 13:18:08 crc kubenswrapper[4724]: I0226 13:18:08.460677 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" event={"ID":"438abc8e-3494-423f-ad25-fb67642b25e4","Type":"ContainerDied","Data":"3a2a473359947c74c0e4a44c9caab3621e6496e8d660d140919cc5d2f4e2ffd6"} Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.007588 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.105195 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr4qn\" (UniqueName: \"kubernetes.io/projected/438abc8e-3494-423f-ad25-fb67642b25e4-kube-api-access-wr4qn\") pod \"438abc8e-3494-423f-ad25-fb67642b25e4\" (UID: \"438abc8e-3494-423f-ad25-fb67642b25e4\") " Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.112238 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438abc8e-3494-423f-ad25-fb67642b25e4-kube-api-access-wr4qn" (OuterVolumeSpecName: "kube-api-access-wr4qn") pod "438abc8e-3494-423f-ad25-fb67642b25e4" (UID: "438abc8e-3494-423f-ad25-fb67642b25e4"). InnerVolumeSpecName "kube-api-access-wr4qn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.207797 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr4qn\" (UniqueName: \"kubernetes.io/projected/438abc8e-3494-423f-ad25-fb67642b25e4-kube-api-access-wr4qn\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.482701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" event={"ID":"438abc8e-3494-423f-ad25-fb67642b25e4","Type":"ContainerDied","Data":"988f0796bf97c0fdf10726170528018af6985fafce1919a327002f258a1dee40"} Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.482740 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988f0796bf97c0fdf10726170528018af6985fafce1919a327002f258a1dee40" Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.482756 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535198-6xzdj" Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.554668 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535192-wnwb2"] Feb 26 13:18:10 crc kubenswrapper[4724]: I0226 13:18:10.566541 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535192-wnwb2"] Feb 26 13:18:11 crc kubenswrapper[4724]: I0226 13:18:11.988894 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8256e3a2-215d-4cea-8641-15be415f6180" path="/var/lib/kubelet/pods/8256e3a2-215d-4cea-8641-15be415f6180/volumes" Feb 26 13:18:14 crc kubenswrapper[4724]: I0226 13:18:14.532753 4724 generic.go:334] "Generic (PLEG): container finished" podID="14b6ff63-4a92-49d9-9d37-0f2092545b77" containerID="aa44cb08d45e4b9bf327a86f7953cb12f197c9ca36499dcdd62d9d5ef4c89ca1" exitCode=0 Feb 26 13:18:14 crc kubenswrapper[4724]: I0226 13:18:14.533065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"14b6ff63-4a92-49d9-9d37-0f2092545b77","Type":"ContainerDied","Data":"aa44cb08d45e4b9bf327a86f7953cb12f197c9ca36499dcdd62d9d5ef4c89ca1"} Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.446779 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473154 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ssh-key\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473215 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-temporary\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473248 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-config-data\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473321 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473386 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ca-certs\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473412 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config-secret\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473471 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473530 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-workdir\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.473651 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvtqx\" (UniqueName: \"kubernetes.io/projected/14b6ff63-4a92-49d9-9d37-0f2092545b77-kube-api-access-bvtqx\") pod \"14b6ff63-4a92-49d9-9d37-0f2092545b77\" (UID: \"14b6ff63-4a92-49d9-9d37-0f2092545b77\") " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.479927 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-temporary" 
(OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.480330 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b6ff63-4a92-49d9-9d37-0f2092545b77-kube-api-access-bvtqx" (OuterVolumeSpecName: "kube-api-access-bvtqx") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "kube-api-access-bvtqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.481691 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-config-data" (OuterVolumeSpecName: "config-data") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.497459 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.521643 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.527014 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.527646 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.539406 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.552220 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "14b6ff63-4a92-49d9-9d37-0f2092545b77" (UID: "14b6ff63-4a92-49d9-9d37-0f2092545b77"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.568655 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"14b6ff63-4a92-49d9-9d37-0f2092545b77","Type":"ContainerDied","Data":"9fc4205fd5b72c50c66826bc69e83ab35d49920066f2e5030285b0dba052ce6b"} Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.568701 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc4205fd5b72c50c66826bc69e83ab35d49920066f2e5030285b0dba052ce6b" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.568708 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.575688 4724 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.575727 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvtqx\" (UniqueName: \"kubernetes.io/projected/14b6ff63-4a92-49d9-9d37-0f2092545b77-kube-api-access-bvtqx\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.575739 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.575749 4724 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/14b6ff63-4a92-49d9-9d37-0f2092545b77-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.575807 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.581360 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.581447 4724 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.581468 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.581493 4724 reconciler_common.go:293] "Volume 
detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/14b6ff63-4a92-49d9-9d37-0f2092545b77-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.624771 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.683913 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.714731 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Feb 26 13:18:17 crc kubenswrapper[4724]: E0226 13:18:17.715506 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b6ff63-4a92-49d9-9d37-0f2092545b77" containerName="tempest-tests-tempest-tests-runner" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.715600 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b6ff63-4a92-49d9-9d37-0f2092545b77" containerName="tempest-tests-tempest-tests-runner" Feb 26 13:18:17 crc kubenswrapper[4724]: E0226 13:18:17.715663 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438abc8e-3494-423f-ad25-fb67642b25e4" containerName="oc" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.715752 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="438abc8e-3494-423f-ad25-fb67642b25e4" containerName="oc" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.716023 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b6ff63-4a92-49d9-9d37-0f2092545b77" containerName="tempest-tests-tempest-tests-runner" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.716105 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="438abc8e-3494-423f-ad25-fb67642b25e4" containerName="oc" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.719724 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.723438 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.723672 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.724145 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.727234 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-khdhf" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.750206 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785276 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785533 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785619 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785815 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785904 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf5sx\" (UniqueName: \"kubernetes.io/projected/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-kube-api-access-lf5sx\") pod \"tempest-tests-tempest-s01-single-thread-testing\" 
(UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785943 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785962 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.785991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887553 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887599 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887629 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887668 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887702 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.887747 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.888921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.889529 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.889532 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.889633 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf5sx\" (UniqueName: \"kubernetes.io/projected/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-kube-api-access-lf5sx\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.890632 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.890959 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.891668 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.892061 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.892406 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.893100 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.909759 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf5sx\" (UniqueName: \"kubernetes.io/projected/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-kube-api-access-lf5sx\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:17 crc kubenswrapper[4724]: I0226 13:18:17.932153 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:18 crc kubenswrapper[4724]: I0226 13:18:18.063838 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 13:18:18 crc kubenswrapper[4724]: I0226 13:18:18.740976 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Feb 26 13:18:19 crc kubenswrapper[4724]: I0226 13:18:19.588122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"b9b5bd47-dc7c-492d-8c33-cd7d528555f6","Type":"ContainerStarted","Data":"2b30a67cffca72bc4169a209b1ff757b3610dc3ea448d701deba4d6e33fa00a8"} Feb 26 13:18:26 crc kubenswrapper[4724]: I0226 13:18:26.042636 4724 scope.go:117] "RemoveContainer" containerID="4f3c1cc48e5cd01a3d5d65fffc6fcbbc8ef500d729e49e0736acb5b1a145b3b5" Feb 26 13:18:26 crc kubenswrapper[4724]: I0226 13:18:26.660752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"b9b5bd47-dc7c-492d-8c33-cd7d528555f6","Type":"ContainerStarted","Data":"f1c7e303bd5f0b056c401fb99d967d6d5f95e751fe507c1a21461c7394030e47"} Feb 26 13:18:26 crc kubenswrapper[4724]: I0226 13:18:26.690080 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=9.69006104 podStartE2EDuration="9.69006104s" podCreationTimestamp="2026-02-26 13:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:18:26.68221051 +0000 UTC m=+7973.337949625" watchObservedRunningTime="2026-02-26 13:18:26.69006104 +0000 UTC m=+7973.345800155" Feb 26 13:18:46 crc kubenswrapper[4724]: I0226 13:18:46.906001 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:18:46 crc kubenswrapper[4724]: I0226 13:18:46.906773 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:19:16 crc kubenswrapper[4724]: I0226 13:19:16.906394 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:19:16 crc kubenswrapper[4724]: I0226 13:19:16.907661 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:19:46 crc kubenswrapper[4724]: I0226 13:19:46.906652 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 26 13:19:46 crc kubenswrapper[4724]: I0226 13:19:46.907815 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:19:46 crc kubenswrapper[4724]: I0226 13:19:46.908084 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:19:46 crc kubenswrapper[4724]: I0226 13:19:46.910390 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28ba1bdecaca0324305e932c22e76ce7343db7b56c19c15304770da6d24d656d"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:19:46 crc kubenswrapper[4724]: I0226 13:19:46.910557 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://28ba1bdecaca0324305e932c22e76ce7343db7b56c19c15304770da6d24d656d" gracePeriod=600 Feb 26 13:19:47 crc kubenswrapper[4724]: I0226 13:19:47.245312 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="28ba1bdecaca0324305e932c22e76ce7343db7b56c19c15304770da6d24d656d" exitCode=0 Feb 26 13:19:47 crc kubenswrapper[4724]: I0226 13:19:47.246060 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"28ba1bdecaca0324305e932c22e76ce7343db7b56c19c15304770da6d24d656d"} Feb 26 13:19:47 crc kubenswrapper[4724]: I0226 13:19:47.246609 4724 scope.go:117] "RemoveContainer" containerID="61af9e01d19cee39eac456a5b7f43738849d042b1aa7f6b01dd113f71b720402" Feb 26 13:19:48 crc kubenswrapper[4724]: I0226 13:19:48.271415 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"} Feb 26 13:19:51 crc kubenswrapper[4724]: I0226 13:19:51.236999 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-5f2tw" podUID="ee48a99c-cb5f-4564-9631-daeae942461e" containerName="registry-server" probeResult="failure" output=< Feb 26 13:19:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:19:51 crc kubenswrapper[4724]: > Feb 26 13:19:51 crc kubenswrapper[4724]: I0226 13:19:51.801612 4724 trace.go:236] Trace[2055734633]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-9wnqm" (26-Feb-2026 13:19:50.527) (total time: 1265ms): Feb 26 13:19:51 crc kubenswrapper[4724]: Trace[2055734633]: [1.265432739s] [1.265432739s] END Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.184857 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f468d56b9-wpq97"] Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.190222 4724 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.334371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-config\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.334458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-ovndb-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.334517 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-combined-ca-bundle\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.334595 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4w2\" (UniqueName: \"kubernetes.io/projected/b4d73817-96a8-4f4b-8900-777cd57d2d4c-kube-api-access-jb4w2\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.334891 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-public-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.335058 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-internal-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.335743 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-httpd-config\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.401900 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f468d56b9-wpq97"] Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.437886 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-config\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.437927 4724 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-ovndb-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.437959 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-combined-ca-bundle\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.437995 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb4w2\" (UniqueName: \"kubernetes.io/projected/b4d73817-96a8-4f4b-8900-777cd57d2d4c-kube-api-access-jb4w2\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.438035 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-public-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.438062 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-internal-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.438148 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-httpd-config\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.448405 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-config\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.449385 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-ovndb-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.448469 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-internal-tls-certs\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.461364 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-public-tls-certs\") pod 
\"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.476873 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-httpd-config\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.477311 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb4w2\" (UniqueName: \"kubernetes.io/projected/b4d73817-96a8-4f4b-8900-777cd57d2d4c-kube-api-access-jb4w2\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.477656 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-combined-ca-bundle\") pod \"neutron-6f468d56b9-wpq97\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:57 crc kubenswrapper[4724]: I0226 13:19:57.511305 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:19:58 crc kubenswrapper[4724]: I0226 13:19:58.707891 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f468d56b9-wpq97"] Feb 26 13:19:59 crc kubenswrapper[4724]: I0226 13:19:59.511649 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f468d56b9-wpq97" event={"ID":"b4d73817-96a8-4f4b-8900-777cd57d2d4c","Type":"ContainerStarted","Data":"c488d0b7b83fc46589dec68ef32408ea8fb8c617255772f9efbe3c55705a0422"} Feb 26 13:19:59 crc kubenswrapper[4724]: I0226 13:19:59.512038 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f468d56b9-wpq97" event={"ID":"b4d73817-96a8-4f4b-8900-777cd57d2d4c","Type":"ContainerStarted","Data":"cc9865120f619ab4c71d9789f05d65b9bb2e3b07b7cd9b82232cfda55c830dcb"} Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.231222 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535200-7jcmg"] Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.235709 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.239493 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.240486 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.245684 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535200-7jcmg"] Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.250149 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.269324 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpk5\" (UniqueName: \"kubernetes.io/projected/cc782d3e-34ab-48bd-ad62-03c79637f69b-kube-api-access-vbpk5\") pod \"auto-csr-approver-29535200-7jcmg\" (UID: \"cc782d3e-34ab-48bd-ad62-03c79637f69b\") " pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.373372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpk5\" (UniqueName: \"kubernetes.io/projected/cc782d3e-34ab-48bd-ad62-03c79637f69b-kube-api-access-vbpk5\") pod \"auto-csr-approver-29535200-7jcmg\" (UID: \"cc782d3e-34ab-48bd-ad62-03c79637f69b\") " pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.399675 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbpk5\" (UniqueName: \"kubernetes.io/projected/cc782d3e-34ab-48bd-ad62-03c79637f69b-kube-api-access-vbpk5\") pod \"auto-csr-approver-29535200-7jcmg\" (UID: \"cc782d3e-34ab-48bd-ad62-03c79637f69b\") " pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.526465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f468d56b9-wpq97" event={"ID":"b4d73817-96a8-4f4b-8900-777cd57d2d4c","Type":"ContainerStarted","Data":"b0708210790c7874b094f3b7159c4d2badd22d6cd0d1ce6cf79ab92203079526"} Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.526845 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.557343 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f468d56b9-wpq97" podStartSLOduration=4.55730615 podStartE2EDuration="4.55730615s" podCreationTimestamp="2026-02-26 13:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:20:00.554733944 +0000 UTC m=+8067.210473059" watchObservedRunningTime="2026-02-26 13:20:00.55730615 +0000 UTC m=+8067.213045265" Feb 26 13:20:00 crc kubenswrapper[4724]: I0226 13:20:00.563120 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:01 crc kubenswrapper[4724]: I0226 13:20:01.654033 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535200-7jcmg"] Feb 26 13:20:02 crc kubenswrapper[4724]: I0226 13:20:02.547503 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" event={"ID":"cc782d3e-34ab-48bd-ad62-03c79637f69b","Type":"ContainerStarted","Data":"abe7ab577b896001770675aac9b88f737bb16987686304979b6d95cd60b84727"} Feb 26 13:20:05 crc kubenswrapper[4724]: I0226 13:20:05.584579 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" event={"ID":"cc782d3e-34ab-48bd-ad62-03c79637f69b","Type":"ContainerStarted","Data":"7f8ae851bbcc235ee3a32f0bfd88fdebeeecabcc79c3ffdfdbbe0f257c4e4aab"} Feb 26 13:20:05 crc kubenswrapper[4724]: I0226 13:20:05.610760 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" podStartSLOduration=3.11171511 podStartE2EDuration="5.610730035s" podCreationTimestamp="2026-02-26 13:20:00 +0000 UTC" firstStartedPulling="2026-02-26 13:20:01.681029116 +0000 UTC m=+8068.336768231" lastFinishedPulling="2026-02-26 13:20:04.180044041 +0000 UTC m=+8070.835783156" observedRunningTime="2026-02-26 13:20:05.604438584 +0000 UTC m=+8072.260177689" watchObservedRunningTime="2026-02-26 13:20:05.610730035 +0000 UTC m=+8072.266469150" Feb 26 13:20:08 crc kubenswrapper[4724]: I0226 13:20:08.628122 4724 generic.go:334] "Generic (PLEG): container finished" podID="cc782d3e-34ab-48bd-ad62-03c79637f69b" containerID="7f8ae851bbcc235ee3a32f0bfd88fdebeeecabcc79c3ffdfdbbe0f257c4e4aab" exitCode=0 Feb 26 13:20:08 crc kubenswrapper[4724]: I0226 13:20:08.628266 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" event={"ID":"cc782d3e-34ab-48bd-ad62-03c79637f69b","Type":"ContainerDied","Data":"7f8ae851bbcc235ee3a32f0bfd88fdebeeecabcc79c3ffdfdbbe0f257c4e4aab"} Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.544519 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.647604 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbpk5\" (UniqueName: \"kubernetes.io/projected/cc782d3e-34ab-48bd-ad62-03c79637f69b-kube-api-access-vbpk5\") pod \"cc782d3e-34ab-48bd-ad62-03c79637f69b\" (UID: \"cc782d3e-34ab-48bd-ad62-03c79637f69b\") " Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.659027 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" event={"ID":"cc782d3e-34ab-48bd-ad62-03c79637f69b","Type":"ContainerDied","Data":"abe7ab577b896001770675aac9b88f737bb16987686304979b6d95cd60b84727"} Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.659071 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe7ab577b896001770675aac9b88f737bb16987686304979b6d95cd60b84727" Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.659129 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535200-7jcmg" Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.678932 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc782d3e-34ab-48bd-ad62-03c79637f69b-kube-api-access-vbpk5" (OuterVolumeSpecName: "kube-api-access-vbpk5") pod "cc782d3e-34ab-48bd-ad62-03c79637f69b" (UID: "cc782d3e-34ab-48bd-ad62-03c79637f69b"). InnerVolumeSpecName "kube-api-access-vbpk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.751070 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbpk5\" (UniqueName: \"kubernetes.io/projected/cc782d3e-34ab-48bd-ad62-03c79637f69b-kube-api-access-vbpk5\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.756091 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535194-2q7dp"] Feb 26 13:20:10 crc kubenswrapper[4724]: I0226 13:20:10.766766 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535194-2q7dp"] Feb 26 13:20:11 crc kubenswrapper[4724]: I0226 13:20:11.990962 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b26798b-9ef9-4b67-9326-a987e89231ed" path="/var/lib/kubelet/pods/4b26798b-9ef9-4b67-9326-a987e89231ed/volumes" Feb 26 13:20:26 crc kubenswrapper[4724]: I0226 13:20:26.209332 4724 scope.go:117] "RemoveContainer" containerID="9ecb622f6bf3bbb40885190faa27ecd9794c81753cba76ff1c3b2683e1d810ec" Feb 26 13:20:27 crc kubenswrapper[4724]: I0226 13:20:27.528859 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 13:20:27 crc kubenswrapper[4724]: I0226 13:20:27.664684 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-555b8bfd77-p4h8t"] Feb 26 13:20:27 crc kubenswrapper[4724]: I0226 13:20:27.665928 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-555b8bfd77-p4h8t" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-api" containerID="cri-o://aff0fea17a29a376504998816473a6ceda732ced9fc9d08ff62f8ee9435e7897" gracePeriod=30 Feb 26 13:20:27 crc kubenswrapper[4724]: I0226 13:20:27.666074 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-555b8bfd77-p4h8t" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-httpd" containerID="cri-o://7abab6a00fbf38c719c85c773734fe9c390c4ef4a63f97e9ee3e057437f2a57d" gracePeriod=30 Feb 26 13:20:28 crc kubenswrapper[4724]: I0226 13:20:28.910367 4724 generic.go:334] "Generic (PLEG): container finished" podID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerID="7abab6a00fbf38c719c85c773734fe9c390c4ef4a63f97e9ee3e057437f2a57d" exitCode=0 Feb 26 13:20:28 crc kubenswrapper[4724]: I0226 13:20:28.910437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555b8bfd77-p4h8t" event={"ID":"e7b8af94-a922-4315-bab6-3b67cda647e0","Type":"ContainerDied","Data":"7abab6a00fbf38c719c85c773734fe9c390c4ef4a63f97e9ee3e057437f2a57d"} Feb 26 13:20:34 crc kubenswrapper[4724]: I0226 13:20:34.973783 4724 generic.go:334] "Generic (PLEG): container finished" podID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerID="aff0fea17a29a376504998816473a6ceda732ced9fc9d08ff62f8ee9435e7897" exitCode=0 Feb 26 13:20:34 crc kubenswrapper[4724]: I0226 13:20:34.973867 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555b8bfd77-p4h8t" event={"ID":"e7b8af94-a922-4315-bab6-3b67cda647e0","Type":"ContainerDied","Data":"aff0fea17a29a376504998816473a6ceda732ced9fc9d08ff62f8ee9435e7897"} Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.206629 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.229924 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-ovndb-tls-certs\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.229973 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-internal-tls-certs\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.230240 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-public-tls-certs\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.230366 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-config\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.230431 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j96f8\" (UniqueName: \"kubernetes.io/projected/e7b8af94-a922-4315-bab6-3b67cda647e0-kube-api-access-j96f8\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.230454 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-combined-ca-bundle\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.230518 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-httpd-config\") pod \"e7b8af94-a922-4315-bab6-3b67cda647e0\" (UID: \"e7b8af94-a922-4315-bab6-3b67cda647e0\") " Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.325647 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.328097 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b8af94-a922-4315-bab6-3b67cda647e0-kube-api-access-j96f8" (OuterVolumeSpecName: "kube-api-access-j96f8") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "kube-api-access-j96f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.333441 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j96f8\" (UniqueName: \"kubernetes.io/projected/e7b8af94-a922-4315-bab6-3b67cda647e0-kube-api-access-j96f8\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.333478 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.426141 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-config" (OuterVolumeSpecName: "config") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.436267 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-config\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.456624 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.476429 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.476524 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.476574 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7b8af94-a922-4315-bab6-3b67cda647e0" (UID: "e7b8af94-a922-4315-bab6-3b67cda647e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.538300 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.538338 4724 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.538351 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:36 crc kubenswrapper[4724]: I0226 13:20:36.538362 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7b8af94-a922-4315-bab6-3b67cda647e0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.016286 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555b8bfd77-p4h8t" event={"ID":"e7b8af94-a922-4315-bab6-3b67cda647e0","Type":"ContainerDied","Data":"dd9db7b4ca78b6af142c92add520db98e02f9a61043135797508ebaf3416aefa"} Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.016659 4724 scope.go:117] "RemoveContainer" containerID="7abab6a00fbf38c719c85c773734fe9c390c4ef4a63f97e9ee3e057437f2a57d" Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.016824 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-555b8bfd77-p4h8t" Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.054591 4724 scope.go:117] "RemoveContainer" containerID="aff0fea17a29a376504998816473a6ceda732ced9fc9d08ff62f8ee9435e7897" Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.056513 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-555b8bfd77-p4h8t"] Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.075911 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-555b8bfd77-p4h8t"] Feb 26 13:20:37 crc kubenswrapper[4724]: I0226 13:20:37.986132 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" path="/var/lib/kubelet/pods/e7b8af94-a922-4315-bab6-3b67cda647e0/volumes" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.645626 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 13:20:55 crc kubenswrapper[4724]: E0226 13:20:55.647120 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-httpd" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.647144 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-httpd" Feb 26 13:20:55 crc kubenswrapper[4724]: E0226 13:20:55.647211 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc782d3e-34ab-48bd-ad62-03c79637f69b" containerName="oc" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.647220 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc782d3e-34ab-48bd-ad62-03c79637f69b" containerName="oc" Feb 26 13:20:55 crc kubenswrapper[4724]: E0226 13:20:55.647240 
4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-api" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.647253 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-api" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.647612 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-httpd" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.647635 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc782d3e-34ab-48bd-ad62-03c79637f69b" containerName="oc" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.647652 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b8af94-a922-4315-bab6-3b67cda647e0" containerName="neutron-api" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.651347 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.690635 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.718371 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-catalog-content\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.718463 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2wd\" (UniqueName: \"kubernetes.io/projected/ea4160fe-1944-4874-ae62-704c7884d8ca-kube-api-access-9j2wd\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.718489 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-utilities\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.821933 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-catalog-content\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.822486 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2wd\" (UniqueName: \"kubernetes.io/projected/ea4160fe-1944-4874-ae62-704c7884d8ca-kube-api-access-9j2wd\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.822538 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-utilities\") pod 
\"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.823134 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-utilities\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.824030 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-catalog-content\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.853371 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2wd\" (UniqueName: \"kubernetes.io/projected/ea4160fe-1944-4874-ae62-704c7884d8ca-kube-api-access-9j2wd\") pod \"redhat-operators-x8l8d\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:55 crc kubenswrapper[4724]: I0226 13:20:55.991520 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:20:56 crc kubenswrapper[4724]: I0226 13:20:56.541011 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 13:20:57 crc kubenswrapper[4724]: I0226 13:20:57.241676 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerID="8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8" exitCode=0 Feb 26 13:20:57 crc kubenswrapper[4724]: I0226 13:20:57.241869 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerDied","Data":"8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8"} Feb 26 13:20:57 crc kubenswrapper[4724]: I0226 13:20:57.242088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerStarted","Data":"6bf23ca54dc1c29ca4edb648084dd7904ebe665e957b59101a5f14a4c343a684"} Feb 26 13:21:14 crc kubenswrapper[4724]: E0226 13:21:14.599045 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 13:21:14 crc kubenswrapper[4724]: E0226 13:21:14.608305 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9j2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x8l8d_openshift-marketplace(ea4160fe-1944-4874-ae62-704c7884d8ca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 13:21:14 crc kubenswrapper[4724]: E0226 13:21:14.610596 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" Feb 26 13:21:15 crc kubenswrapper[4724]: E0226 13:21:15.466043 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.129644 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5f7l"] Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.136278 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.158601 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5f7l"] Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.195134 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-catalog-content\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.195499 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-utilities\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.195780 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rht8q\" (UniqueName: \"kubernetes.io/projected/e3614a5e-277e-4889-a502-614e441951d0-kube-api-access-rht8q\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.301249 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-utilities\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.301923 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-utilities\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.302883 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rht8q\" (UniqueName: \"kubernetes.io/projected/e3614a5e-277e-4889-a502-614e441951d0-kube-api-access-rht8q\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.304388 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-catalog-content\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.304963 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-catalog-content\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.330285 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rht8q\" (UniqueName: \"kubernetes.io/projected/e3614a5e-277e-4889-a502-614e441951d0-kube-api-access-rht8q\") pod \"community-operators-n5f7l\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:23 crc kubenswrapper[4724]: I0226 13:21:23.474557 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:25 crc kubenswrapper[4724]: I0226 13:21:25.558565 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5f7l"] Feb 26 13:21:25 crc kubenswrapper[4724]: I0226 13:21:25.589224 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerStarted","Data":"52afa6bc2d1bab68fdc631e32439527a10f4483f6347ae6e6c19ce8573427e14"} Feb 26 13:21:26 crc kubenswrapper[4724]: I0226 13:21:26.602693 4724 generic.go:334] "Generic (PLEG): container finished" podID="e3614a5e-277e-4889-a502-614e441951d0" containerID="7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538" exitCode=0 Feb 26 13:21:26 crc kubenswrapper[4724]: I0226 13:21:26.602788 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerDied","Data":"7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538"} Feb 26 13:21:26 crc kubenswrapper[4724]: I0226 13:21:26.613722 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:21:28 crc kubenswrapper[4724]: I0226 13:21:28.625464 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerStarted","Data":"d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c"} Feb 26 13:21:32 crc kubenswrapper[4724]: I0226 13:21:32.683770 4724 generic.go:334] "Generic (PLEG): container finished" podID="e3614a5e-277e-4889-a502-614e441951d0" containerID="d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c" exitCode=0 Feb 26 13:21:32 crc kubenswrapper[4724]: I0226 13:21:32.683880 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerDied","Data":"d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c"} Feb 26 13:21:32 crc kubenswrapper[4724]: I0226 13:21:32.689534 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerStarted","Data":"8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a"} Feb 26 13:21:34 crc kubenswrapper[4724]: I0226 13:21:34.730912 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerStarted","Data":"0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678"} Feb 26 13:21:43 crc kubenswrapper[4724]: I0226 13:21:43.475130 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:43 crc kubenswrapper[4724]: I0226 13:21:43.477580 4724 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:21:44 crc kubenswrapper[4724]: I0226 13:21:44.782639 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n5f7l" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" probeResult="failure" output=< Feb 26 13:21:44 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:21:44 crc kubenswrapper[4724]: > Feb 26 13:21:44 crc kubenswrapper[4724]: I0226 13:21:44.867050 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerID="8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a" exitCode=0 Feb 26 13:21:44 crc kubenswrapper[4724]: I0226 13:21:44.867104 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerDied","Data":"8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a"} Feb 26 13:21:44 crc kubenswrapper[4724]: I0226 13:21:44.904463 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5f7l" podStartSLOduration=15.27204541 podStartE2EDuration="21.904420837s" podCreationTimestamp="2026-02-26 13:21:23 +0000 UTC" firstStartedPulling="2026-02-26 13:21:26.605041867 +0000 UTC m=+8153.260780982" lastFinishedPulling="2026-02-26 13:21:33.237417294 +0000 UTC m=+8159.893156409" observedRunningTime="2026-02-26 13:21:34.759397536 +0000 UTC m=+8161.415136651" watchObservedRunningTime="2026-02-26 13:21:44.904420837 +0000 UTC m=+8171.560159952" Feb 26 13:21:46 crc kubenswrapper[4724]: I0226 13:21:46.897977 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerStarted","Data":"8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b"} Feb 26 13:21:46 crc kubenswrapper[4724]: I0226 13:21:46.947680 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x8l8d" podStartSLOduration=3.120748396 podStartE2EDuration="51.94765358s" podCreationTimestamp="2026-02-26 13:20:55 +0000 UTC" firstStartedPulling="2026-02-26 13:20:57.253598324 +0000 UTC m=+8123.909337439" lastFinishedPulling="2026-02-26 13:21:46.080503508 +0000 UTC m=+8172.736242623" observedRunningTime="2026-02-26 13:21:46.945040494 +0000 UTC m=+8173.600779609" watchObservedRunningTime="2026-02-26 13:21:46.94765358 +0000 UTC m=+8173.603392705" Feb 26 13:21:54 crc kubenswrapper[4724]: I0226 13:21:54.535845 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n5f7l" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" probeResult="failure" output=< Feb 26 13:21:54 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:21:54 crc kubenswrapper[4724]: > Feb 26 13:21:55 crc kubenswrapper[4724]: I0226 13:21:55.991747 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:21:55 crc kubenswrapper[4724]: I0226 13:21:55.992364 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:21:57 crc kubenswrapper[4724]: I0226 13:21:57.058450 4724 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:21:57 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:21:57 crc kubenswrapper[4724]: > Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.192326 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535202-wcwln"] Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.195144 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.201076 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.201424 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.201595 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.211707 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535202-wcwln"] Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.248062 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262rj\" (UniqueName: \"kubernetes.io/projected/5b03994e-43be-4db7-abcb-76798381572c-kube-api-access-262rj\") pod \"auto-csr-approver-29535202-wcwln\" (UID: \"5b03994e-43be-4db7-abcb-76798381572c\") " pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.350937 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-262rj\" (UniqueName: \"kubernetes.io/projected/5b03994e-43be-4db7-abcb-76798381572c-kube-api-access-262rj\") pod \"auto-csr-approver-29535202-wcwln\" (UID: \"5b03994e-43be-4db7-abcb-76798381572c\") " pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.383325 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-262rj\" (UniqueName: \"kubernetes.io/projected/5b03994e-43be-4db7-abcb-76798381572c-kube-api-access-262rj\") pod \"auto-csr-approver-29535202-wcwln\" (UID: \"5b03994e-43be-4db7-abcb-76798381572c\") " pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:00 crc kubenswrapper[4724]: I0226 13:22:00.551117 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:01 crc kubenswrapper[4724]: I0226 13:22:01.741625 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535202-wcwln"] Feb 26 13:22:02 crc kubenswrapper[4724]: I0226 13:22:02.293314 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535202-wcwln" event={"ID":"5b03994e-43be-4db7-abcb-76798381572c","Type":"ContainerStarted","Data":"7a2662ac60ee51cf62e38f7e96267b190804871b90b4c7ef15179c8bc2b8ffeb"} Feb 26 13:22:04 crc kubenswrapper[4724]: I0226 13:22:04.530791 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n5f7l" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:04 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:04 crc kubenswrapper[4724]: > Feb 26 13:22:05 crc kubenswrapper[4724]: I0226 13:22:05.329715 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535202-wcwln" event={"ID":"5b03994e-43be-4db7-abcb-76798381572c","Type":"ContainerStarted","Data":"c9377e56fb91caa517db01d1c94fff82491317d1cd5b764a807ba4f97e646ad9"} Feb 26 13:22:05 crc kubenswrapper[4724]: I0226 13:22:05.373289 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535202-wcwln" podStartSLOduration=2.796208354 podStartE2EDuration="5.37325875s" podCreationTimestamp="2026-02-26 13:22:00 +0000 UTC" firstStartedPulling="2026-02-26 13:22:01.730779166 +0000 UTC m=+8188.386518271" lastFinishedPulling="2026-02-26 13:22:04.307829552 +0000 UTC m=+8190.963568667" observedRunningTime="2026-02-26 13:22:05.352632244 +0000 UTC m=+8192.008371359" watchObservedRunningTime="2026-02-26 13:22:05.37325875 +0000 UTC m=+8192.028997855" Feb 26 13:22:07 crc kubenswrapper[4724]: I0226 13:22:07.042106 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:07 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:07 crc kubenswrapper[4724]: > Feb 26 13:22:07 crc kubenswrapper[4724]: I0226 13:22:07.359534 4724 generic.go:334] "Generic (PLEG): container finished" podID="5b03994e-43be-4db7-abcb-76798381572c" containerID="c9377e56fb91caa517db01d1c94fff82491317d1cd5b764a807ba4f97e646ad9" exitCode=0 Feb 26 13:22:07 crc kubenswrapper[4724]: I0226 13:22:07.359581 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535202-wcwln" event={"ID":"5b03994e-43be-4db7-abcb-76798381572c","Type":"ContainerDied","Data":"c9377e56fb91caa517db01d1c94fff82491317d1cd5b764a807ba4f97e646ad9"} Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:08.902216 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.078359 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-262rj\" (UniqueName: \"kubernetes.io/projected/5b03994e-43be-4db7-abcb-76798381572c-kube-api-access-262rj\") pod \"5b03994e-43be-4db7-abcb-76798381572c\" (UID: \"5b03994e-43be-4db7-abcb-76798381572c\") " Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.099617 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b03994e-43be-4db7-abcb-76798381572c-kube-api-access-262rj" (OuterVolumeSpecName: "kube-api-access-262rj") pod "5b03994e-43be-4db7-abcb-76798381572c" (UID: "5b03994e-43be-4db7-abcb-76798381572c"). InnerVolumeSpecName "kube-api-access-262rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.181304 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-262rj\" (UniqueName: \"kubernetes.io/projected/5b03994e-43be-4db7-abcb-76798381572c-kube-api-access-262rj\") on node \"crc\" DevicePath \"\"" Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.381558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535202-wcwln" event={"ID":"5b03994e-43be-4db7-abcb-76798381572c","Type":"ContainerDied","Data":"7a2662ac60ee51cf62e38f7e96267b190804871b90b4c7ef15179c8bc2b8ffeb"} Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.381595 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2662ac60ee51cf62e38f7e96267b190804871b90b4c7ef15179c8bc2b8ffeb" Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.381643 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535202-wcwln" Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.491277 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535196-9ws75"] Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.504632 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535196-9ws75"] Feb 26 13:22:10 crc kubenswrapper[4724]: I0226 13:22:09.995758 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45e6f70-4f96-4935-9f53-b00971cbe271" path="/var/lib/kubelet/pods/f45e6f70-4f96-4935-9f53-b00971cbe271/volumes" Feb 26 13:22:13 crc kubenswrapper[4724]: I0226 13:22:13.540625 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:22:13 crc kubenswrapper[4724]: I0226 13:22:13.596151 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:22:13 crc kubenswrapper[4724]: I0226 13:22:13.806211 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5f7l"] Feb 26 13:22:15 crc kubenswrapper[4724]: I0226 13:22:15.443986 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5f7l" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" containerID="cri-o://0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678" gracePeriod=2 Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.075981 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.230957 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-catalog-content\") pod \"e3614a5e-277e-4889-a502-614e441951d0\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.231531 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rht8q\" (UniqueName: \"kubernetes.io/projected/e3614a5e-277e-4889-a502-614e441951d0-kube-api-access-rht8q\") pod \"e3614a5e-277e-4889-a502-614e441951d0\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.231622 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-utilities\") pod \"e3614a5e-277e-4889-a502-614e441951d0\" (UID: \"e3614a5e-277e-4889-a502-614e441951d0\") " Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.232394 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-utilities" (OuterVolumeSpecName: "utilities") pod "e3614a5e-277e-4889-a502-614e441951d0" (UID: "e3614a5e-277e-4889-a502-614e441951d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.233103 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.238386 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3614a5e-277e-4889-a502-614e441951d0-kube-api-access-rht8q" (OuterVolumeSpecName: "kube-api-access-rht8q") pod "e3614a5e-277e-4889-a502-614e441951d0" (UID: "e3614a5e-277e-4889-a502-614e441951d0"). InnerVolumeSpecName "kube-api-access-rht8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.290714 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3614a5e-277e-4889-a502-614e441951d0" (UID: "e3614a5e-277e-4889-a502-614e441951d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.334916 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3614a5e-277e-4889-a502-614e441951d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.334956 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rht8q\" (UniqueName: \"kubernetes.io/projected/e3614a5e-277e-4889-a502-614e441951d0-kube-api-access-rht8q\") on node \"crc\" DevicePath \"\"" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.458164 4724 generic.go:334] "Generic (PLEG): container finished" podID="e3614a5e-277e-4889-a502-614e441951d0" containerID="0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678" exitCode=0 Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.458334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerDied","Data":"0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678"} Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.458386 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5f7l" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.458428 4724 scope.go:117] "RemoveContainer" containerID="0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.458395 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5f7l" event={"ID":"e3614a5e-277e-4889-a502-614e441951d0","Type":"ContainerDied","Data":"52afa6bc2d1bab68fdc631e32439527a10f4483f6347ae6e6c19ce8573427e14"} Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.499255 4724 scope.go:117] "RemoveContainer" containerID="d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.504261 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5f7l"] Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.518708 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5f7l"] Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.537076 4724 scope.go:117] "RemoveContainer" containerID="7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.583364 4724 scope.go:117] "RemoveContainer" containerID="0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678" Feb 26 13:22:16 crc kubenswrapper[4724]: E0226 13:22:16.592122 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678\": container with ID starting with 0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678 not found: ID does not exist" containerID="0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.593281 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678"} err="failed to get container status \"0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678\": rpc error: code = NotFound desc = could not find container \"0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678\": container with ID starting with 0f40d0bafaa4b516751ea521e86fe257e5afdbabe5adf4d7001c35a239cae678 not found: ID does not exist" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.593364 4724 scope.go:117] "RemoveContainer" containerID="d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c" Feb 26 13:22:16 crc kubenswrapper[4724]: E0226 13:22:16.594237 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c\": container with ID starting with d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c not found: ID does not exist" containerID="d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.594301 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c"} err="failed to get container status \"d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c\": rpc error: code = NotFound desc = could not find 
container \"d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c\": container with ID starting with d240a196759de2d3c74b0176b46b0b75d0296367f30f8693a14e46ebfcbc1b1c not found: ID does not exist" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.594343 4724 scope.go:117] "RemoveContainer" containerID="7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538" Feb 26 13:22:16 crc kubenswrapper[4724]: E0226 13:22:16.596233 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538\": container with ID starting with 7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538 not found: ID does not exist" containerID="7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.596277 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538"} err="failed to get container status \"7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538\": rpc error: code = NotFound desc = could not find container \"7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538\": container with ID starting with 7c59b3005b39b1891c0f576d6dbfa507df955b70c0d9cff017df764bdd8de538 not found: ID does not exist" Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.906858 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:22:16 crc kubenswrapper[4724]: I0226 13:22:16.907749 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:22:17 crc kubenswrapper[4724]: I0226 13:22:17.059877 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:17 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:17 crc kubenswrapper[4724]: > Feb 26 13:22:17 crc kubenswrapper[4724]: I0226 13:22:17.991968 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3614a5e-277e-4889-a502-614e441951d0" path="/var/lib/kubelet/pods/e3614a5e-277e-4889-a502-614e441951d0/volumes" Feb 26 13:22:26 crc kubenswrapper[4724]: I0226 13:22:26.409818 4724 scope.go:117] "RemoveContainer" containerID="233e34c9a8bf0c29be97d97f14f503e507d538ba701245bcc7df3ddddf2768a2" Feb 26 13:22:27 crc kubenswrapper[4724]: I0226 13:22:27.051022 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:27 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:27 crc kubenswrapper[4724]: > Feb 26 13:22:37 crc kubenswrapper[4724]: I0226 13:22:37.038784 4724 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:37 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:37 crc kubenswrapper[4724]: > Feb 26 13:22:46 crc kubenswrapper[4724]: I0226 13:22:46.905868 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:22:46 crc kubenswrapper[4724]: I0226 13:22:46.906532 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:22:47 crc kubenswrapper[4724]: I0226 13:22:47.053921 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:47 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:47 crc kubenswrapper[4724]: > Feb 26 13:22:57 crc kubenswrapper[4724]: I0226 13:22:57.044115 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:22:57 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:22:57 crc kubenswrapper[4724]: > Feb 26 13:23:07 crc kubenswrapper[4724]: I0226 13:23:07.051338 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" probeResult="failure" output=< Feb 26 13:23:07 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:23:07 crc kubenswrapper[4724]: > Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.045820 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.100469 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.262950 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.328933 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.329374 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kvrc9" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" containerID="cri-o://2ef9acd2479557da1f6ac2c3e5875f7f34d0d90dd5382d3e2b63cb59ab78920d" gracePeriod=2 Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.906025 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.906109 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.906195 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.907630 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:23:16 crc kubenswrapper[4724]: I0226 13:23:16.907697 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" gracePeriod=600 Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.104877 4724 generic.go:334] "Generic (PLEG): container finished" podID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerID="2ef9acd2479557da1f6ac2c3e5875f7f34d0d90dd5382d3e2b63cb59ab78920d" exitCode=0 Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.104925 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerDied","Data":"2ef9acd2479557da1f6ac2c3e5875f7f34d0d90dd5382d3e2b63cb59ab78920d"} Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.108981 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" exitCode=0 Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.109048 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"} Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.109150 4724 scope.go:117] "RemoveContainer" containerID="28ba1bdecaca0324305e932c22e76ce7343db7b56c19c15304770da6d24d656d" Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.715432 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.855283 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5mrv\" (UniqueName: \"kubernetes.io/projected/4638ff21-51d9-4b6d-b860-322f48d04d41-kube-api-access-c5mrv\") pod \"4638ff21-51d9-4b6d-b860-322f48d04d41\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.855363 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-utilities\") pod \"4638ff21-51d9-4b6d-b860-322f48d04d41\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.855565 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-catalog-content\") pod \"4638ff21-51d9-4b6d-b860-322f48d04d41\" (UID: \"4638ff21-51d9-4b6d-b860-322f48d04d41\") " Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.856563 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-utilities" (OuterVolumeSpecName: "utilities") pod "4638ff21-51d9-4b6d-b860-322f48d04d41" (UID: "4638ff21-51d9-4b6d-b860-322f48d04d41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.873532 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4638ff21-51d9-4b6d-b860-322f48d04d41-kube-api-access-c5mrv" (OuterVolumeSpecName: "kube-api-access-c5mrv") pod "4638ff21-51d9-4b6d-b860-322f48d04d41" (UID: "4638ff21-51d9-4b6d-b860-322f48d04d41"). InnerVolumeSpecName "kube-api-access-c5mrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.957719 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5mrv\" (UniqueName: \"kubernetes.io/projected/4638ff21-51d9-4b6d-b860-322f48d04d41-kube-api-access-c5mrv\") on node \"crc\" DevicePath \"\"" Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.957750 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:23:17 crc kubenswrapper[4724]: I0226 13:23:17.972637 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4638ff21-51d9-4b6d-b860-322f48d04d41" (UID: "4638ff21-51d9-4b6d-b860-322f48d04d41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.060105 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4638ff21-51d9-4b6d-b860-322f48d04d41-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:23:18 crc kubenswrapper[4724]: E0226 13:23:18.063365 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.124080 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvrc9" event={"ID":"4638ff21-51d9-4b6d-b860-322f48d04d41","Type":"ContainerDied","Data":"72210747e3293351c2b8dd6aed481f0039d41ba2975c6f5e602c44de8cf4d216"} Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.124972 4724 scope.go:117] "RemoveContainer" containerID="2ef9acd2479557da1f6ac2c3e5875f7f34d0d90dd5382d3e2b63cb59ab78920d" Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.125159 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kvrc9" Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.132251 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:23:18 crc kubenswrapper[4724]: E0226 13:23:18.132574 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.191565 4724 scope.go:117] "RemoveContainer" containerID="cb04aa0ca4513d88aadda6ae00e45b59fe5ef7a8dd95b5aaf66cfdc0e2c0fc10" Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.231258 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.245464 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kvrc9"] Feb 26 13:23:18 crc kubenswrapper[4724]: I0226 13:23:18.245998 4724 scope.go:117] "RemoveContainer" containerID="6a78be5c990c68530fb888dc18c9580308ba7c294de5f5b475776002f4ec49b4" Feb 26 13:23:19 crc kubenswrapper[4724]: I0226 13:23:19.991095 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" path="/var/lib/kubelet/pods/4638ff21-51d9-4b6d-b860-322f48d04d41/volumes" Feb 26 13:23:30 crc kubenswrapper[4724]: I0226 13:23:30.975655 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:23:30 crc kubenswrapper[4724]: E0226 13:23:30.976941 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:23:43 crc kubenswrapper[4724]: I0226 13:23:43.981558 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:23:43 crc kubenswrapper[4724]: E0226 13:23:43.982323 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:23:56 crc kubenswrapper[4724]: I0226 13:23:56.989173 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:23:56 crc kubenswrapper[4724]: E0226 13:23:56.990463 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.188035 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535204-5kwg7"] Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193316 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193409 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193516 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="extract-content" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193529 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="extract-content" Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193562 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b03994e-43be-4db7-abcb-76798381572c" containerName="oc" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193600 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b03994e-43be-4db7-abcb-76798381572c" containerName="oc" Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193679 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="extract-utilities" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193692 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="extract-utilities" Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193728 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="extract-utilities" Feb 26 
13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193739 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="extract-utilities" Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193768 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="extract-content" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193785 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="extract-content" Feb 26 13:24:00 crc kubenswrapper[4724]: E0226 13:24:00.193801 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.193814 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.195494 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4638ff21-51d9-4b6d-b860-322f48d04d41" containerName="registry-server" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.195536 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b03994e-43be-4db7-abcb-76798381572c" containerName="oc" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.195559 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3614a5e-277e-4889-a502-614e441951d0" containerName="registry-server" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.197963 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.202668 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535204-5kwg7"] Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.208033 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.208087 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.208031 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.288997 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8868j\" (UniqueName: \"kubernetes.io/projected/4b202c1d-17d8-4d89-8e39-808aea75e518-kube-api-access-8868j\") pod \"auto-csr-approver-29535204-5kwg7\" (UID: \"4b202c1d-17d8-4d89-8e39-808aea75e518\") " pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.391592 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8868j\" (UniqueName: \"kubernetes.io/projected/4b202c1d-17d8-4d89-8e39-808aea75e518-kube-api-access-8868j\") pod \"auto-csr-approver-29535204-5kwg7\" (UID: \"4b202c1d-17d8-4d89-8e39-808aea75e518\") " pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.422407 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8868j\" (UniqueName: 
\"kubernetes.io/projected/4b202c1d-17d8-4d89-8e39-808aea75e518-kube-api-access-8868j\") pod \"auto-csr-approver-29535204-5kwg7\" (UID: \"4b202c1d-17d8-4d89-8e39-808aea75e518\") " pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:00 crc kubenswrapper[4724]: I0226 13:24:00.537886 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:01 crc kubenswrapper[4724]: I0226 13:24:01.208065 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535204-5kwg7"] Feb 26 13:24:01 crc kubenswrapper[4724]: I0226 13:24:01.676423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" event={"ID":"4b202c1d-17d8-4d89-8e39-808aea75e518","Type":"ContainerStarted","Data":"c7b79b41abddb3187e52245327d99be51ff13c8516fe59d3124274c75348beb1"} Feb 26 13:24:03 crc kubenswrapper[4724]: I0226 13:24:03.708528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" event={"ID":"4b202c1d-17d8-4d89-8e39-808aea75e518","Type":"ContainerStarted","Data":"bfb9afa390143ed1c8ea6116215b67c2714a32a399fd5707fd6c9918cbff9cef"} Feb 26 13:24:03 crc kubenswrapper[4724]: I0226 13:24:03.728317 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" podStartSLOduration=2.579417457 podStartE2EDuration="3.728285823s" podCreationTimestamp="2026-02-26 13:24:00 +0000 UTC" firstStartedPulling="2026-02-26 13:24:01.231502186 +0000 UTC m=+8307.887241311" lastFinishedPulling="2026-02-26 13:24:02.380370562 +0000 UTC m=+8309.036109677" observedRunningTime="2026-02-26 13:24:03.726321733 +0000 UTC m=+8310.382060858" watchObservedRunningTime="2026-02-26 13:24:03.728285823 +0000 UTC m=+8310.384024938" Feb 26 13:24:04 crc kubenswrapper[4724]: I0226 13:24:04.722229 4724 generic.go:334] "Generic (PLEG): container finished" podID="4b202c1d-17d8-4d89-8e39-808aea75e518" containerID="bfb9afa390143ed1c8ea6116215b67c2714a32a399fd5707fd6c9918cbff9cef" exitCode=0 Feb 26 13:24:04 crc kubenswrapper[4724]: I0226 13:24:04.722329 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" event={"ID":"4b202c1d-17d8-4d89-8e39-808aea75e518","Type":"ContainerDied","Data":"bfb9afa390143ed1c8ea6116215b67c2714a32a399fd5707fd6c9918cbff9cef"} Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.192987 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.303970 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8868j\" (UniqueName: \"kubernetes.io/projected/4b202c1d-17d8-4d89-8e39-808aea75e518-kube-api-access-8868j\") pod \"4b202c1d-17d8-4d89-8e39-808aea75e518\" (UID: \"4b202c1d-17d8-4d89-8e39-808aea75e518\") " Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.324871 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b202c1d-17d8-4d89-8e39-808aea75e518-kube-api-access-8868j" (OuterVolumeSpecName: "kube-api-access-8868j") pod "4b202c1d-17d8-4d89-8e39-808aea75e518" (UID: "4b202c1d-17d8-4d89-8e39-808aea75e518"). InnerVolumeSpecName "kube-api-access-8868j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.407579 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8868j\" (UniqueName: \"kubernetes.io/projected/4b202c1d-17d8-4d89-8e39-808aea75e518-kube-api-access-8868j\") on node \"crc\" DevicePath \"\"" Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.747626 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" event={"ID":"4b202c1d-17d8-4d89-8e39-808aea75e518","Type":"ContainerDied","Data":"c7b79b41abddb3187e52245327d99be51ff13c8516fe59d3124274c75348beb1"} Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.747700 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535204-5kwg7" Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.747711 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b79b41abddb3187e52245327d99be51ff13c8516fe59d3124274c75348beb1" Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.854923 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535198-6xzdj"] Feb 26 13:24:06 crc kubenswrapper[4724]: I0226 13:24:06.866114 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535198-6xzdj"] Feb 26 13:24:07 crc kubenswrapper[4724]: I0226 13:24:07.989205 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438abc8e-3494-423f-ad25-fb67642b25e4" path="/var/lib/kubelet/pods/438abc8e-3494-423f-ad25-fb67642b25e4/volumes" Feb 26 13:24:08 crc kubenswrapper[4724]: I0226 13:24:08.975320 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:24:08 crc kubenswrapper[4724]: E0226 13:24:08.975645 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:24:23 crc kubenswrapper[4724]: I0226 13:24:23.986684 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:24:23 crc kubenswrapper[4724]: E0226 13:24:23.988037 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:24:26 crc kubenswrapper[4724]: I0226 13:24:26.605207 4724 scope.go:117] "RemoveContainer" containerID="3a2a473359947c74c0e4a44c9caab3621e6496e8d660d140919cc5d2f4e2ffd6" Feb 26 13:24:34 crc kubenswrapper[4724]: I0226 13:24:34.976012 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:24:34 crc kubenswrapper[4724]: E0226 13:24:34.977463 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:24:49 crc kubenswrapper[4724]: I0226 13:24:49.978150 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:24:49 crc kubenswrapper[4724]: E0226 13:24:49.980650 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:25:01 crc kubenswrapper[4724]: I0226 13:25:01.976577 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:25:01 crc kubenswrapper[4724]: E0226 13:25:01.977724 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:25:15 crc kubenswrapper[4724]: I0226 13:25:15.976043 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:25:15 crc kubenswrapper[4724]: E0226 13:25:15.978030 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:25:26 crc kubenswrapper[4724]: I0226 13:25:26.976379 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:25:26 crc kubenswrapper[4724]: E0226 13:25:26.977321 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:25:41 crc kubenswrapper[4724]: I0226 13:25:41.976386 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:25:41 crc kubenswrapper[4724]: E0226 13:25:41.979036 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:25:55 crc kubenswrapper[4724]: I0226 13:25:55.976661 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:25:55 crc kubenswrapper[4724]: E0226 13:25:55.978080 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.183583 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535206-cjh7b"] Feb 26 13:26:00 crc kubenswrapper[4724]: E0226 13:26:00.186900 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b202c1d-17d8-4d89-8e39-808aea75e518" containerName="oc" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.187028 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b202c1d-17d8-4d89-8e39-808aea75e518" containerName="oc" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.187470 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b202c1d-17d8-4d89-8e39-808aea75e518" containerName="oc" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.188653 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.193075 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.193443 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.196581 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.208757 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535206-cjh7b"] Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.289406 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdqck\" (UniqueName: \"kubernetes.io/projected/582f9049-54e0-4adb-bf40-4fbb18f663f7-kube-api-access-gdqck\") pod \"auto-csr-approver-29535206-cjh7b\" (UID: \"582f9049-54e0-4adb-bf40-4fbb18f663f7\") " pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.393359 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdqck\" (UniqueName: \"kubernetes.io/projected/582f9049-54e0-4adb-bf40-4fbb18f663f7-kube-api-access-gdqck\") pod \"auto-csr-approver-29535206-cjh7b\" (UID: \"582f9049-54e0-4adb-bf40-4fbb18f663f7\") " pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.420756 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdqck\" (UniqueName: 
\"kubernetes.io/projected/582f9049-54e0-4adb-bf40-4fbb18f663f7-kube-api-access-gdqck\") pod \"auto-csr-approver-29535206-cjh7b\" (UID: \"582f9049-54e0-4adb-bf40-4fbb18f663f7\") " pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:00 crc kubenswrapper[4724]: I0226 13:26:00.532716 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:01 crc kubenswrapper[4724]: I0226 13:26:01.137281 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535206-cjh7b"] Feb 26 13:26:01 crc kubenswrapper[4724]: I0226 13:26:01.313689 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" event={"ID":"582f9049-54e0-4adb-bf40-4fbb18f663f7","Type":"ContainerStarted","Data":"dd0ef1372f086b5b3a594dc7ec98dbe0d8cd7389b1ced2c2bd6d4ca1cbd7f8b4"} Feb 26 13:26:03 crc kubenswrapper[4724]: I0226 13:26:03.415479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" event={"ID":"582f9049-54e0-4adb-bf40-4fbb18f663f7","Type":"ContainerStarted","Data":"868a6495024e6e7be1d5a44a061d100280b267b484f88e3ed59d22cabbf51b3a"} Feb 26 13:26:03 crc kubenswrapper[4724]: I0226 13:26:03.469916 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" podStartSLOduration=2.360423665 podStartE2EDuration="3.469879357s" podCreationTimestamp="2026-02-26 13:26:00 +0000 UTC" firstStartedPulling="2026-02-26 13:26:01.137756567 +0000 UTC m=+8427.793495682" lastFinishedPulling="2026-02-26 13:26:02.247212259 +0000 UTC m=+8428.902951374" observedRunningTime="2026-02-26 13:26:03.462807847 +0000 UTC m=+8430.118546972" watchObservedRunningTime="2026-02-26 13:26:03.469879357 +0000 UTC m=+8430.125618482" Feb 26 13:26:07 crc kubenswrapper[4724]: I0226 13:26:07.478849 4724 generic.go:334] "Generic (PLEG): container finished" podID="582f9049-54e0-4adb-bf40-4fbb18f663f7" containerID="868a6495024e6e7be1d5a44a061d100280b267b484f88e3ed59d22cabbf51b3a" exitCode=0 Feb 26 13:26:07 crc kubenswrapper[4724]: I0226 13:26:07.478956 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" event={"ID":"582f9049-54e0-4adb-bf40-4fbb18f663f7","Type":"ContainerDied","Data":"868a6495024e6e7be1d5a44a061d100280b267b484f88e3ed59d22cabbf51b3a"} Feb 26 13:26:07 crc kubenswrapper[4724]: I0226 13:26:07.977064 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:26:07 crc kubenswrapper[4724]: E0226 13:26:07.977563 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.309335 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.493529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdqck\" (UniqueName: \"kubernetes.io/projected/582f9049-54e0-4adb-bf40-4fbb18f663f7-kube-api-access-gdqck\") pod \"582f9049-54e0-4adb-bf40-4fbb18f663f7\" (UID: \"582f9049-54e0-4adb-bf40-4fbb18f663f7\") " Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.502528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" event={"ID":"582f9049-54e0-4adb-bf40-4fbb18f663f7","Type":"ContainerDied","Data":"dd0ef1372f086b5b3a594dc7ec98dbe0d8cd7389b1ced2c2bd6d4ca1cbd7f8b4"} Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.502599 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd0ef1372f086b5b3a594dc7ec98dbe0d8cd7389b1ced2c2bd6d4ca1cbd7f8b4" Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.502689 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535206-cjh7b" Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.510821 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582f9049-54e0-4adb-bf40-4fbb18f663f7-kube-api-access-gdqck" (OuterVolumeSpecName: "kube-api-access-gdqck") pod "582f9049-54e0-4adb-bf40-4fbb18f663f7" (UID: "582f9049-54e0-4adb-bf40-4fbb18f663f7"). InnerVolumeSpecName "kube-api-access-gdqck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.596807 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdqck\" (UniqueName: \"kubernetes.io/projected/582f9049-54e0-4adb-bf40-4fbb18f663f7-kube-api-access-gdqck\") on node \"crc\" DevicePath \"\"" Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.597148 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535200-7jcmg"] Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.621232 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535200-7jcmg"] Feb 26 13:26:09 crc kubenswrapper[4724]: I0226 13:26:09.992117 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc782d3e-34ab-48bd-ad62-03c79637f69b" path="/var/lib/kubelet/pods/cc782d3e-34ab-48bd-ad62-03c79637f69b/volumes" Feb 26 13:26:21 crc kubenswrapper[4724]: I0226 13:26:21.978570 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:26:21 crc kubenswrapper[4724]: E0226 13:26:21.979828 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:26:26 crc kubenswrapper[4724]: I0226 13:26:26.746633 4724 scope.go:117] "RemoveContainer" containerID="7f8ae851bbcc235ee3a32f0bfd88fdebeeecabcc79c3ffdfdbbe0f257c4e4aab" Feb 26 13:26:36 crc kubenswrapper[4724]: I0226 13:26:36.975140 4724 scope.go:117] "RemoveContainer" 
containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:26:36 crc kubenswrapper[4724]: E0226 13:26:36.976014 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:26:47 crc kubenswrapper[4724]: I0226 13:26:47.975978 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:26:47 crc kubenswrapper[4724]: E0226 13:26:47.977125 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:27:01 crc kubenswrapper[4724]: I0226 13:27:01.976628 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:27:01 crc kubenswrapper[4724]: E0226 13:27:01.977448 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:27:13 crc kubenswrapper[4724]: I0226 13:27:13.984106 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:27:13 crc kubenswrapper[4724]: E0226 13:27:13.986052 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.723257 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxj2"] Feb 26 13:27:20 crc kubenswrapper[4724]: E0226 13:27:20.724683 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582f9049-54e0-4adb-bf40-4fbb18f663f7" containerName="oc" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.724703 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="582f9049-54e0-4adb-bf40-4fbb18f663f7" containerName="oc" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.725010 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="582f9049-54e0-4adb-bf40-4fbb18f663f7" containerName="oc" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.734029 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.763320 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-utilities\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.763649 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8d5p\" (UniqueName: \"kubernetes.io/projected/12c3fbf5-d959-432a-8719-ae991227613a-kube-api-access-l8d5p\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.764579 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-catalog-content\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.766077 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxj2"] Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.867704 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-catalog-content\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.868372 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-utilities\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.868496 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8d5p\" (UniqueName: \"kubernetes.io/projected/12c3fbf5-d959-432a-8719-ae991227613a-kube-api-access-l8d5p\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.868836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-catalog-content\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.869367 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-utilities\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:20 crc kubenswrapper[4724]: I0226 13:27:20.908254 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l8d5p\" (UniqueName: \"kubernetes.io/projected/12c3fbf5-d959-432a-8719-ae991227613a-kube-api-access-l8d5p\") pod \"redhat-marketplace-pjxj2\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") " pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:21 crc kubenswrapper[4724]: I0226 13:27:21.091074 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:22 crc kubenswrapper[4724]: I0226 13:27:22.203223 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxj2"] Feb 26 13:27:22 crc kubenswrapper[4724]: I0226 13:27:22.867062 4724 generic.go:334] "Generic (PLEG): container finished" podID="12c3fbf5-d959-432a-8719-ae991227613a" containerID="9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3" exitCode=0 Feb 26 13:27:22 crc kubenswrapper[4724]: I0226 13:27:22.867276 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerDied","Data":"9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3"} Feb 26 13:27:22 crc kubenswrapper[4724]: I0226 13:27:22.867744 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerStarted","Data":"4d6a2788d41c1ccd3ee11ad80eea91a1eaa8268c4c4b7949b429cdfab21a8315"} Feb 26 13:27:22 crc kubenswrapper[4724]: I0226 13:27:22.874493 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:27:23 crc kubenswrapper[4724]: I0226 13:27:23.884068 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerStarted","Data":"2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea"} Feb 26 13:27:26 crc kubenswrapper[4724]: I0226 13:27:26.928421 4724 generic.go:334] "Generic (PLEG): container finished" podID="12c3fbf5-d959-432a-8719-ae991227613a" containerID="2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea" exitCode=0 Feb 26 13:27:26 crc kubenswrapper[4724]: I0226 13:27:26.928636 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerDied","Data":"2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea"} Feb 26 13:27:27 crc kubenswrapper[4724]: I0226 13:27:27.946100 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerStarted","Data":"a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614"} Feb 26 13:27:27 crc kubenswrapper[4724]: I0226 13:27:27.977102 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:27:27 crc kubenswrapper[4724]: E0226 13:27:27.977450 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:27:27 crc kubenswrapper[4724]: I0226 13:27:27.979132 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pjxj2" podStartSLOduration=3.477675874 podStartE2EDuration="7.979100372s" podCreationTimestamp="2026-02-26 13:27:20 +0000 UTC" firstStartedPulling="2026-02-26 13:27:22.870936712 +0000 UTC m=+8509.526675827" lastFinishedPulling="2026-02-26 13:27:27.37236121 +0000 UTC m=+8514.028100325" observedRunningTime="2026-02-26 13:27:27.973483929 +0000 UTC m=+8514.629223054" watchObservedRunningTime="2026-02-26 13:27:27.979100372 +0000 UTC m=+8514.634839497" Feb 26 13:27:31 crc kubenswrapper[4724]: I0226 13:27:31.092362 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:31 crc kubenswrapper[4724]: I0226 13:27:31.093366 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:32 crc kubenswrapper[4724]: I0226 13:27:32.154631 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pjxj2" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:27:32 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:27:32 crc kubenswrapper[4724]: > Feb 26 13:27:39 crc kubenswrapper[4724]: I0226 13:27:39.867414 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lxq6"] Feb 26 13:27:39 crc kubenswrapper[4724]: I0226 13:27:39.872024 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:39 crc kubenswrapper[4724]: I0226 13:27:39.887374 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lxq6"] Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.006216 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6fd\" (UniqueName: \"kubernetes.io/projected/5ada39f2-69a8-4c5f-8779-2b5f68429da1-kube-api-access-2m6fd\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.006398 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-utilities\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.006809 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-catalog-content\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.109726 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-utilities\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.109890 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-catalog-content\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.110078 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6fd\" (UniqueName: \"kubernetes.io/projected/5ada39f2-69a8-4c5f-8779-2b5f68429da1-kube-api-access-2m6fd\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.110503 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-catalog-content\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.110507 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-utilities\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.140130 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2m6fd\" (UniqueName: \"kubernetes.io/projected/5ada39f2-69a8-4c5f-8779-2b5f68429da1-kube-api-access-2m6fd\") pod \"certified-operators-7lxq6\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") " pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:40 crc kubenswrapper[4724]: I0226 13:27:40.197605 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:41 crc kubenswrapper[4724]: I0226 13:27:41.021507 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lxq6"] Feb 26 13:27:41 crc kubenswrapper[4724]: I0226 13:27:41.115752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerStarted","Data":"6474bce750998bd7e482c6ad8a53bb1176942fd51ca66ef1b120c04744ad1719"} Feb 26 13:27:41 crc kubenswrapper[4724]: I0226 13:27:41.976013 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078" Feb 26 13:27:41 crc kubenswrapper[4724]: E0226 13:27:41.976826 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:27:42 crc kubenswrapper[4724]: I0226 13:27:42.135284 4724 generic.go:334] "Generic (PLEG): container finished" podID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerID="cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169" exitCode=0 Feb 26 13:27:42 crc kubenswrapper[4724]: I0226 13:27:42.136758 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerDied","Data":"cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169"} Feb 26 13:27:42 crc kubenswrapper[4724]: I0226 13:27:42.161587 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pjxj2" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:27:42 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:27:42 crc kubenswrapper[4724]: > Feb 26 13:27:44 crc kubenswrapper[4724]: I0226 13:27:44.164030 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerStarted","Data":"8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15"} Feb 26 13:27:47 crc kubenswrapper[4724]: I0226 13:27:47.202334 4724 generic.go:334] "Generic (PLEG): container finished" podID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerID="8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15" exitCode=0 Feb 26 13:27:47 crc kubenswrapper[4724]: I0226 13:27:47.202437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" 
event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerDied","Data":"8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15"} Feb 26 13:27:48 crc kubenswrapper[4724]: I0226 13:27:48.229830 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerStarted","Data":"37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28"} Feb 26 13:27:48 crc kubenswrapper[4724]: I0226 13:27:48.342924 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lxq6" podStartSLOduration=3.842728579 podStartE2EDuration="9.342892126s" podCreationTimestamp="2026-02-26 13:27:39 +0000 UTC" firstStartedPulling="2026-02-26 13:27:42.140112943 +0000 UTC m=+8528.795852058" lastFinishedPulling="2026-02-26 13:27:47.64027649 +0000 UTC m=+8534.296015605" observedRunningTime="2026-02-26 13:27:48.323904872 +0000 UTC m=+8534.979643987" watchObservedRunningTime="2026-02-26 13:27:48.342892126 +0000 UTC m=+8534.998631241" Feb 26 13:27:50 crc kubenswrapper[4724]: I0226 13:27:50.198536 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:50 crc kubenswrapper[4724]: I0226 13:27:50.199270 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lxq6" Feb 26 13:27:51 crc kubenswrapper[4724]: I0226 13:27:51.155764 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:51 crc kubenswrapper[4724]: I0226 13:27:51.219373 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pjxj2" Feb 26 13:27:51 crc kubenswrapper[4724]: I0226 13:27:51.266930 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7lxq6" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="registry-server" probeResult="failure" output=< Feb 26 13:27:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:27:51 crc kubenswrapper[4724]: > Feb 26 13:27:51 crc kubenswrapper[4724]: I0226 13:27:51.930686 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxj2"] Feb 26 13:27:52 crc kubenswrapper[4724]: I0226 13:27:52.280630 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pjxj2" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="registry-server" containerID="cri-o://a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614" gracePeriod=2 Feb 26 13:27:52 crc kubenswrapper[4724]: I0226 13:27:52.937563 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxj2"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.053617 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-catalog-content\") pod \"12c3fbf5-d959-432a-8719-ae991227613a\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") "
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.054502 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-utilities\") pod \"12c3fbf5-d959-432a-8719-ae991227613a\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") "
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.054865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8d5p\" (UniqueName: \"kubernetes.io/projected/12c3fbf5-d959-432a-8719-ae991227613a-kube-api-access-l8d5p\") pod \"12c3fbf5-d959-432a-8719-ae991227613a\" (UID: \"12c3fbf5-d959-432a-8719-ae991227613a\") "
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.055393 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-utilities" (OuterVolumeSpecName: "utilities") pod "12c3fbf5-d959-432a-8719-ae991227613a" (UID: "12c3fbf5-d959-432a-8719-ae991227613a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.056619 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.080226 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c3fbf5-d959-432a-8719-ae991227613a-kube-api-access-l8d5p" (OuterVolumeSpecName: "kube-api-access-l8d5p") pod "12c3fbf5-d959-432a-8719-ae991227613a" (UID: "12c3fbf5-d959-432a-8719-ae991227613a"). InnerVolumeSpecName "kube-api-access-l8d5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.092233 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12c3fbf5-d959-432a-8719-ae991227613a" (UID: "12c3fbf5-d959-432a-8719-ae991227613a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.159554 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8d5p\" (UniqueName: \"kubernetes.io/projected/12c3fbf5-d959-432a-8719-ae991227613a-kube-api-access-l8d5p\") on node \"crc\" DevicePath \"\""
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.159612 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12c3fbf5-d959-432a-8719-ae991227613a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.294070 4724 generic.go:334] "Generic (PLEG): container finished" podID="12c3fbf5-d959-432a-8719-ae991227613a" containerID="a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614" exitCode=0
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.294139 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerDied","Data":"a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614"}
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.294201 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjxj2"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.294241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjxj2" event={"ID":"12c3fbf5-d959-432a-8719-ae991227613a","Type":"ContainerDied","Data":"4d6a2788d41c1ccd3ee11ad80eea91a1eaa8268c4c4b7949b429cdfab21a8315"}
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.294292 4724 scope.go:117] "RemoveContainer" containerID="a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.350294 4724 scope.go:117] "RemoveContainer" containerID="2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.351034 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxj2"]
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.362953 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjxj2"]
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.390218 4724 scope.go:117] "RemoveContainer" containerID="9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.431511 4724 scope.go:117] "RemoveContainer" containerID="a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614"
Feb 26 13:27:53 crc kubenswrapper[4724]: E0226 13:27:53.439541 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614\": container with ID starting with a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614 not found: ID does not exist" containerID="a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.439600 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614"} err="failed to get container status \"a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614\": rpc error: code = NotFound desc = could not find container \"a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614\": container with ID starting with a9cdae531372d4a05243f0c8b6059911cdb2cb39aa78a01308e9e9dc074e0614 not found: ID does not exist"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.439633 4724 scope.go:117] "RemoveContainer" containerID="2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea"
Feb 26 13:27:53 crc kubenswrapper[4724]: E0226 13:27:53.440308 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea\": container with ID starting with 2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea not found: ID does not exist" containerID="2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.440335 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea"} err="failed to get container status \"2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea\": rpc error: code = NotFound desc = could not find container \"2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea\": container with ID starting with 2488a29e78a2a6fccee207f07d8fe81e53a183cb1289d0b1affa7590aaac49ea not found: ID does not exist"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.440354 4724 scope.go:117] "RemoveContainer" containerID="9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3"
Feb 26 13:27:53 crc kubenswrapper[4724]: E0226 13:27:53.440635 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3\": container with ID starting with 9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3 not found: ID does not exist" containerID="9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.440677 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3"} err="failed to get container status \"9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3\": rpc error: code = NotFound desc = could not find container \"9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3\": container with ID starting with 9a86f38405c255f7658bfc1419ed867f576191210980bc54cf0e8541c948f0e3 not found: ID does not exist"
Feb 26 13:27:53 crc kubenswrapper[4724]: I0226 13:27:53.995455 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c3fbf5-d959-432a-8719-ae991227613a" path="/var/lib/kubelet/pods/12c3fbf5-d959-432a-8719-ae991227613a/volumes"
Feb 26 13:27:54 crc kubenswrapper[4724]: I0226 13:27:54.976356 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"
Feb 26 13:27:54 crc kubenswrapper[4724]: E0226 13:27:54.977158 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.230537 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535208-pj82n"]
Feb 26 13:28:00 crc kubenswrapper[4724]: E0226 13:28:00.232065 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="extract-utilities"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.232086 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="extract-utilities"
Feb 26 13:28:00 crc kubenswrapper[4724]: E0226 13:28:00.232127 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="extract-content"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.232133 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="extract-content"
Feb 26 13:28:00 crc kubenswrapper[4724]: E0226 13:28:00.238754 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="registry-server"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.238801 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="registry-server"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.239466 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c3fbf5-d959-432a-8719-ae991227613a" containerName="registry-server"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.240496 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.279883 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535208-pj82n"]
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.309482 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.310330 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.309494 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.359583 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2h8\" (UniqueName: \"kubernetes.io/projected/4c0430b1-2564-4984-b53e-e5dec336f43d-kube-api-access-cl2h8\") pod \"auto-csr-approver-29535208-pj82n\" (UID: \"4c0430b1-2564-4984-b53e-e5dec336f43d\") " pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.463787 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2h8\" (UniqueName: \"kubernetes.io/projected/4c0430b1-2564-4984-b53e-e5dec336f43d-kube-api-access-cl2h8\") pod \"auto-csr-approver-29535208-pj82n\" (UID: \"4c0430b1-2564-4984-b53e-e5dec336f43d\") " pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.491779 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2h8\" (UniqueName: \"kubernetes.io/projected/4c0430b1-2564-4984-b53e-e5dec336f43d-kube-api-access-cl2h8\") pod \"auto-csr-approver-29535208-pj82n\" (UID: \"4c0430b1-2564-4984-b53e-e5dec336f43d\") " pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:00 crc kubenswrapper[4724]: I0226 13:28:00.606658 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:01 crc kubenswrapper[4724]: I0226 13:28:01.185503 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535208-pj82n"]
Feb 26 13:28:01 crc kubenswrapper[4724]: I0226 13:28:01.324226 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7lxq6" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="registry-server" probeResult="failure" output=<
Feb 26 13:28:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 13:28:01 crc kubenswrapper[4724]: >
Feb 26 13:28:01 crc kubenswrapper[4724]: I0226 13:28:01.397696 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535208-pj82n" event={"ID":"4c0430b1-2564-4984-b53e-e5dec336f43d","Type":"ContainerStarted","Data":"6df02636d3d28917d296324c896b52c4a702f2acf4da87cfe8d66f304ea47128"}
Feb 26 13:28:04 crc kubenswrapper[4724]: I0226 13:28:04.434498 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535208-pj82n" event={"ID":"4c0430b1-2564-4984-b53e-e5dec336f43d","Type":"ContainerStarted","Data":"6ff4f136c52b4a65fc481e3c3e2d62faa5942fb1fb4ca0e3a5b75791eb94141d"}
Feb 26 13:28:04 crc kubenswrapper[4724]: I0226 13:28:04.461880 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535208-pj82n" podStartSLOduration=2.886419653 podStartE2EDuration="4.461840886s" podCreationTimestamp="2026-02-26 13:28:00 +0000 UTC" firstStartedPulling="2026-02-26 13:28:01.201397784 +0000 UTC m=+8547.857136899" lastFinishedPulling="2026-02-26 13:28:02.776819017 +0000 UTC m=+8549.432558132" observedRunningTime="2026-02-26 13:28:04.45299228 +0000 UTC m=+8551.108731395" watchObservedRunningTime="2026-02-26 13:28:04.461840886 +0000 UTC m=+8551.117580011"
Feb 26 13:28:06 crc kubenswrapper[4724]: I0226 13:28:06.483148 4724 generic.go:334] "Generic (PLEG): container finished" podID="4c0430b1-2564-4984-b53e-e5dec336f43d" containerID="6ff4f136c52b4a65fc481e3c3e2d62faa5942fb1fb4ca0e3a5b75791eb94141d" exitCode=0
Feb 26 13:28:06 crc kubenswrapper[4724]: I0226 13:28:06.483283 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535208-pj82n" event={"ID":"4c0430b1-2564-4984-b53e-e5dec336f43d","Type":"ContainerDied","Data":"6ff4f136c52b4a65fc481e3c3e2d62faa5942fb1fb4ca0e3a5b75791eb94141d"}
Feb 26 13:28:06 crc kubenswrapper[4724]: I0226 13:28:06.977230 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"
Feb 26 13:28:06 crc kubenswrapper[4724]: E0226 13:28:06.977620 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.015948 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.084090 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2h8\" (UniqueName: \"kubernetes.io/projected/4c0430b1-2564-4984-b53e-e5dec336f43d-kube-api-access-cl2h8\") pod \"4c0430b1-2564-4984-b53e-e5dec336f43d\" (UID: \"4c0430b1-2564-4984-b53e-e5dec336f43d\") "
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.107555 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0430b1-2564-4984-b53e-e5dec336f43d-kube-api-access-cl2h8" (OuterVolumeSpecName: "kube-api-access-cl2h8") pod "4c0430b1-2564-4984-b53e-e5dec336f43d" (UID: "4c0430b1-2564-4984-b53e-e5dec336f43d"). InnerVolumeSpecName "kube-api-access-cl2h8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.189816 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2h8\" (UniqueName: \"kubernetes.io/projected/4c0430b1-2564-4984-b53e-e5dec336f43d-kube-api-access-cl2h8\") on node \"crc\" DevicePath \"\""
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.511714 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535208-pj82n" event={"ID":"4c0430b1-2564-4984-b53e-e5dec336f43d","Type":"ContainerDied","Data":"6df02636d3d28917d296324c896b52c4a702f2acf4da87cfe8d66f304ea47128"}
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.512233 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df02636d3d28917d296324c896b52c4a702f2acf4da87cfe8d66f304ea47128"
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.511799 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535208-pj82n"
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.601037 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535202-wcwln"]
Feb 26 13:28:08 crc kubenswrapper[4724]: I0226 13:28:08.610733 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535202-wcwln"]
Feb 26 13:28:09 crc kubenswrapper[4724]: I0226 13:28:09.994597 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b03994e-43be-4db7-abcb-76798381572c" path="/var/lib/kubelet/pods/5b03994e-43be-4db7-abcb-76798381572c/volumes"
Feb 26 13:28:10 crc kubenswrapper[4724]: I0226 13:28:10.257520 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lxq6"
Feb 26 13:28:10 crc kubenswrapper[4724]: I0226 13:28:10.330381 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lxq6"
Feb 26 13:28:11 crc kubenswrapper[4724]: I0226 13:28:11.061937 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lxq6"]
Feb 26 13:28:11 crc kubenswrapper[4724]: I0226 13:28:11.583414 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lxq6" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="registry-server" containerID="cri-o://37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28" gracePeriod=2
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.216022 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxq6"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.309444 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-catalog-content\") pod \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") "
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.309529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6fd\" (UniqueName: \"kubernetes.io/projected/5ada39f2-69a8-4c5f-8779-2b5f68429da1-kube-api-access-2m6fd\") pod \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") "
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.309610 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-utilities\") pod \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\" (UID: \"5ada39f2-69a8-4c5f-8779-2b5f68429da1\") "
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.310900 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-utilities" (OuterVolumeSpecName: "utilities") pod "5ada39f2-69a8-4c5f-8779-2b5f68429da1" (UID: "5ada39f2-69a8-4c5f-8779-2b5f68429da1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.321942 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ada39f2-69a8-4c5f-8779-2b5f68429da1-kube-api-access-2m6fd" (OuterVolumeSpecName: "kube-api-access-2m6fd") pod "5ada39f2-69a8-4c5f-8779-2b5f68429da1" (UID: "5ada39f2-69a8-4c5f-8779-2b5f68429da1"). InnerVolumeSpecName "kube-api-access-2m6fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.387792 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ada39f2-69a8-4c5f-8779-2b5f68429da1" (UID: "5ada39f2-69a8-4c5f-8779-2b5f68429da1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.413315 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.413827 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m6fd\" (UniqueName: \"kubernetes.io/projected/5ada39f2-69a8-4c5f-8779-2b5f68429da1-kube-api-access-2m6fd\") on node \"crc\" DevicePath \"\""
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.413841 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ada39f2-69a8-4c5f-8779-2b5f68429da1-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.599622 4724 generic.go:334] "Generic (PLEG): container finished" podID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerID="37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28" exitCode=0
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.599691 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerDied","Data":"37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28"}
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.599771 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxq6" event={"ID":"5ada39f2-69a8-4c5f-8779-2b5f68429da1","Type":"ContainerDied","Data":"6474bce750998bd7e482c6ad8a53bb1176942fd51ca66ef1b120c04744ad1719"}
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.599782 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxq6"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.599798 4724 scope.go:117] "RemoveContainer" containerID="37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.652999 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lxq6"]
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.653489 4724 scope.go:117] "RemoveContainer" containerID="8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.662937 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lxq6"]
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.684141 4724 scope.go:117] "RemoveContainer" containerID="cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.742615 4724 scope.go:117] "RemoveContainer" containerID="37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28"
Feb 26 13:28:12 crc kubenswrapper[4724]: E0226 13:28:12.743288 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28\": container with ID starting with 37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28 not found: ID does not exist" containerID="37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.743360 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28"} err="failed to get container status \"37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28\": rpc error: code = NotFound desc = could not find container \"37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28\": container with ID starting with 37fce17d4f6d7cb2bdfdb89b42d792233eebde052a9862e26b067c076ec84c28 not found: ID does not exist"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.743400 4724 scope.go:117] "RemoveContainer" containerID="8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15"
Feb 26 13:28:12 crc kubenswrapper[4724]: E0226 13:28:12.743864 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15\": container with ID starting with 8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15 not found: ID does not exist" containerID="8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.743911 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15"} err="failed to get container status \"8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15\": rpc error: code = NotFound desc = could not find container \"8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15\": container with ID starting with 8bfa3fdd37e17e466f756efa32c3504c292774d0e2132d2567adbbd804d63f15 not found: ID does not exist"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.743945 4724 scope.go:117] "RemoveContainer" containerID="cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169"
Feb 26 13:28:12 crc kubenswrapper[4724]: E0226 13:28:12.744348 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169\": container with ID starting with cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169 not found: ID does not exist" containerID="cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169"
Feb 26 13:28:12 crc kubenswrapper[4724]: I0226 13:28:12.744378 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169"} err="failed to get container status \"cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169\": rpc error: code = NotFound desc = could not find container \"cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169\": container with ID starting with cf585c7032c3b3cc134a33b8f758f48bbf9f57920b7e4fd7c40c8c0667105169 not found: ID does not exist"
Feb 26 13:28:13 crc kubenswrapper[4724]: I0226 13:28:13.990632 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" path="/var/lib/kubelet/pods/5ada39f2-69a8-4c5f-8779-2b5f68429da1/volumes"
Feb 26 13:28:17 crc kubenswrapper[4724]: I0226 13:28:17.976324 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"
Feb 26 13:28:18 crc kubenswrapper[4724]: I0226 13:28:18.682409 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"100059746acf6ca85e65372a253a4172b975a3b5ca453fd61bc9f92ecf616151"}
Feb 26 13:28:26 crc kubenswrapper[4724]: I0226 13:28:26.889610 4724 scope.go:117] "RemoveContainer" containerID="c9377e56fb91caa517db01d1c94fff82491317d1cd5b764a807ba4f97e646ad9"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.198458 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"]
Feb 26 13:30:00 crc kubenswrapper[4724]: E0226 13:30:00.200262 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="registry-server"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.200285 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="registry-server"
Feb 26 13:30:00 crc kubenswrapper[4724]: E0226 13:30:00.200317 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="extract-utilities"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.200325 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="extract-utilities"
Feb 26 13:30:00 crc kubenswrapper[4724]: E0226 13:30:00.200366 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="extract-content"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.200373 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="extract-content"
Feb 26 13:30:00 crc kubenswrapper[4724]: E0226 13:30:00.200399 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0430b1-2564-4984-b53e-e5dec336f43d" containerName="oc"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.200410 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0430b1-2564-4984-b53e-e5dec336f43d" containerName="oc"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.200686 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0430b1-2564-4984-b53e-e5dec336f43d" containerName="oc"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.200700 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ada39f2-69a8-4c5f-8779-2b5f68429da1" containerName="registry-server"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.201877 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.211085 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.211308 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.224657 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535210-tlvsx"]
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.227437 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.237296 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.238221 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.239232 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.250905 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"]
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.285502 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535210-tlvsx"]
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.365994 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d732943-e434-4bb5-b301-74a6f7f2ce09-secret-volume\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.366093 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlb6\" (UniqueName: \"kubernetes.io/projected/3d732943-e434-4bb5-b301-74a6f7f2ce09-kube-api-access-wzlb6\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.366139 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sklcz\" (UniqueName: \"kubernetes.io/projected/0198bbfc-b32b-4865-9407-843708d712a1-kube-api-access-sklcz\") pod \"auto-csr-approver-29535210-tlvsx\" (UID: \"0198bbfc-b32b-4865-9407-843708d712a1\") " pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.366306 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d732943-e434-4bb5-b301-74a6f7f2ce09-config-volume\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.468603 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d732943-e434-4bb5-b301-74a6f7f2ce09-secret-volume\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.468678 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlb6\" (UniqueName: \"kubernetes.io/projected/3d732943-e434-4bb5-b301-74a6f7f2ce09-kube-api-access-wzlb6\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.468717 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sklcz\" (UniqueName: \"kubernetes.io/projected/0198bbfc-b32b-4865-9407-843708d712a1-kube-api-access-sklcz\") pod \"auto-csr-approver-29535210-tlvsx\" (UID: \"0198bbfc-b32b-4865-9407-843708d712a1\") " pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.468786 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d732943-e434-4bb5-b301-74a6f7f2ce09-config-volume\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.470347 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d732943-e434-4bb5-b301-74a6f7f2ce09-config-volume\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.480140 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d732943-e434-4bb5-b301-74a6f7f2ce09-secret-volume\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.497983 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sklcz\" (UniqueName: \"kubernetes.io/projected/0198bbfc-b32b-4865-9407-843708d712a1-kube-api-access-sklcz\") pod \"auto-csr-approver-29535210-tlvsx\" (UID: \"0198bbfc-b32b-4865-9407-843708d712a1\") " pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.502219 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlb6\" (UniqueName: \"kubernetes.io/projected/3d732943-e434-4bb5-b301-74a6f7f2ce09-kube-api-access-wzlb6\") pod \"collect-profiles-29535210-sjlh6\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.537877 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:00 crc kubenswrapper[4724]: I0226 13:30:00.574955 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:01 crc kubenswrapper[4724]: I0226 13:30:01.262817 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"]
Feb 26 13:30:01 crc kubenswrapper[4724]: I0226 13:30:01.334889 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535210-tlvsx"]
Feb 26 13:30:01 crc kubenswrapper[4724]: W0226 13:30:01.343390 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0198bbfc_b32b_4865_9407_843708d712a1.slice/crio-f9f7cd3d4ba27be8c518fa5546dd2f7f8b4342e10509a50f202eca180270c6ad WatchSource:0}: Error finding container f9f7cd3d4ba27be8c518fa5546dd2f7f8b4342e10509a50f202eca180270c6ad: Status 404 returned error can't find the container with id f9f7cd3d4ba27be8c518fa5546dd2f7f8b4342e10509a50f202eca180270c6ad
Feb 26 13:30:01 crc kubenswrapper[4724]: I0226 13:30:01.952623 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6" event={"ID":"3d732943-e434-4bb5-b301-74a6f7f2ce09","Type":"ContainerStarted","Data":"dae24897faed70ffc4c74ecc6cbab5243bfd8aa62952227dc78cf8a7cea0ca2d"}
Feb 26 13:30:01 crc kubenswrapper[4724]: I0226 13:30:01.953238 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6" event={"ID":"3d732943-e434-4bb5-b301-74a6f7f2ce09","Type":"ContainerStarted","Data":"2021b57a5a0a3e93ca68ab86909e83bc5f9ca1fb20ab755f7537b5256e27837c"}
Feb 26 13:30:01 crc kubenswrapper[4724]: I0226 13:30:01.956650 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535210-tlvsx" event={"ID":"0198bbfc-b32b-4865-9407-843708d712a1","Type":"ContainerStarted","Data":"f9f7cd3d4ba27be8c518fa5546dd2f7f8b4342e10509a50f202eca180270c6ad"}
Feb 26 13:30:01 crc kubenswrapper[4724]: I0226 13:30:01.979612 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6" podStartSLOduration=1.979570874 podStartE2EDuration="1.979570874s" podCreationTimestamp="2026-02-26 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:30:01.974410244 +0000 UTC m=+8668.630149379" watchObservedRunningTime="2026-02-26 13:30:01.979570874 +0000 UTC m=+8668.635309989"
Feb 26 13:30:03 crc kubenswrapper[4724]: I0226 13:30:03.994787 4724 generic.go:334] "Generic (PLEG): container finished" podID="3d732943-e434-4bb5-b301-74a6f7f2ce09" containerID="dae24897faed70ffc4c74ecc6cbab5243bfd8aa62952227dc78cf8a7cea0ca2d" exitCode=0
Feb 26 13:30:03 crc kubenswrapper[4724]: I0226 13:30:03.994864 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535210-tlvsx" event={"ID":"0198bbfc-b32b-4865-9407-843708d712a1","Type":"ContainerStarted","Data":"f58cc07c57fa0d578cc59f1fc8f36ee5e4ebee12a0be5be108a580c15a023baf"}
Feb 26 13:30:03 crc kubenswrapper[4724]: I0226 13:30:03.996553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6" event={"ID":"3d732943-e434-4bb5-b301-74a6f7f2ce09","Type":"ContainerDied","Data":"dae24897faed70ffc4c74ecc6cbab5243bfd8aa62952227dc78cf8a7cea0ca2d"}
Feb 26 13:30:04 crc kubenswrapper[4724]: I0226 13:30:04.079129 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535210-tlvsx" podStartSLOduration=1.892855975 podStartE2EDuration="4.079096024s" podCreationTimestamp="2026-02-26 13:30:00 +0000 UTC" firstStartedPulling="2026-02-26 13:30:01.343704703 +0000 UTC m=+8667.999443828" lastFinishedPulling="2026-02-26 13:30:03.529944762 +0000 UTC m=+8670.185683877" observedRunningTime="2026-02-26 13:30:04.06504366 +0000 UTC m=+8670.720782795" watchObservedRunningTime="2026-02-26 13:30:04.079096024 +0000 UTC m=+8670.734835139"
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.011095 4724 generic.go:334] "Generic (PLEG): container finished" podID="0198bbfc-b32b-4865-9407-843708d712a1" containerID="f58cc07c57fa0d578cc59f1fc8f36ee5e4ebee12a0be5be108a580c15a023baf" exitCode=0
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.011241 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535210-tlvsx" event={"ID":"0198bbfc-b32b-4865-9407-843708d712a1","Type":"ContainerDied","Data":"f58cc07c57fa0d578cc59f1fc8f36ee5e4ebee12a0be5be108a580c15a023baf"}
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.521549 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.658875 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d732943-e434-4bb5-b301-74a6f7f2ce09-secret-volume\") pod \"3d732943-e434-4bb5-b301-74a6f7f2ce09\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") "
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.659269 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d732943-e434-4bb5-b301-74a6f7f2ce09-config-volume\") pod \"3d732943-e434-4bb5-b301-74a6f7f2ce09\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") "
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.659517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzlb6\" (UniqueName: \"kubernetes.io/projected/3d732943-e434-4bb5-b301-74a6f7f2ce09-kube-api-access-wzlb6\") pod \"3d732943-e434-4bb5-b301-74a6f7f2ce09\" (UID: \"3d732943-e434-4bb5-b301-74a6f7f2ce09\") "
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.662311 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d732943-e434-4bb5-b301-74a6f7f2ce09-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d732943-e434-4bb5-b301-74a6f7f2ce09" (UID: "3d732943-e434-4bb5-b301-74a6f7f2ce09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.671098 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d732943-e434-4bb5-b301-74a6f7f2ce09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d732943-e434-4bb5-b301-74a6f7f2ce09" (UID: "3d732943-e434-4bb5-b301-74a6f7f2ce09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.671756 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d732943-e434-4bb5-b301-74a6f7f2ce09-kube-api-access-wzlb6" (OuterVolumeSpecName: "kube-api-access-wzlb6") pod "3d732943-e434-4bb5-b301-74a6f7f2ce09" (UID: "3d732943-e434-4bb5-b301-74a6f7f2ce09"). InnerVolumeSpecName "kube-api-access-wzlb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.763031 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d732943-e434-4bb5-b301-74a6f7f2ce09-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.763379 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzlb6\" (UniqueName: \"kubernetes.io/projected/3d732943-e434-4bb5-b301-74a6f7f2ce09-kube-api-access-wzlb6\") on node \"crc\" DevicePath \"\""
Feb 26 13:30:05 crc kubenswrapper[4724]: I0226 13:30:05.763451 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d732943-e434-4bb5-b301-74a6f7f2ce09-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.031806 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.034708 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6" event={"ID":"3d732943-e434-4bb5-b301-74a6f7f2ce09","Type":"ContainerDied","Data":"2021b57a5a0a3e93ca68ab86909e83bc5f9ca1fb20ab755f7537b5256e27837c"}
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.034833 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2021b57a5a0a3e93ca68ab86909e83bc5f9ca1fb20ab755f7537b5256e27837c"
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.148329 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"]
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.159000 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535165-vg597"]
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.567151 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.691654 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sklcz\" (UniqueName: \"kubernetes.io/projected/0198bbfc-b32b-4865-9407-843708d712a1-kube-api-access-sklcz\") pod \"0198bbfc-b32b-4865-9407-843708d712a1\" (UID: \"0198bbfc-b32b-4865-9407-843708d712a1\") "
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.708699 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0198bbfc-b32b-4865-9407-843708d712a1-kube-api-access-sklcz" (OuterVolumeSpecName: "kube-api-access-sklcz") pod "0198bbfc-b32b-4865-9407-843708d712a1" (UID: "0198bbfc-b32b-4865-9407-843708d712a1"). InnerVolumeSpecName "kube-api-access-sklcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:30:06 crc kubenswrapper[4724]: I0226 13:30:06.795172 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sklcz\" (UniqueName: \"kubernetes.io/projected/0198bbfc-b32b-4865-9407-843708d712a1-kube-api-access-sklcz\") on node \"crc\" DevicePath \"\""
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.046097 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535210-tlvsx" event={"ID":"0198bbfc-b32b-4865-9407-843708d712a1","Type":"ContainerDied","Data":"f9f7cd3d4ba27be8c518fa5546dd2f7f8b4342e10509a50f202eca180270c6ad"}
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.046163 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9f7cd3d4ba27be8c518fa5546dd2f7f8b4342e10509a50f202eca180270c6ad"
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.046172 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535210-tlvsx"
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.115507 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535204-5kwg7"]
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.128210 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535204-5kwg7"]
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.989738 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b202c1d-17d8-4d89-8e39-808aea75e518" path="/var/lib/kubelet/pods/4b202c1d-17d8-4d89-8e39-808aea75e518/volumes"
Feb 26 13:30:07 crc kubenswrapper[4724]: I0226 13:30:07.991350 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbff6a3-55eb-4222-92a4-960f632ccbaf" path="/var/lib/kubelet/pods/5fbff6a3-55eb-4222-92a4-960f632ccbaf/volumes"
Feb 26 13:30:27 crc kubenswrapper[4724]: I0226 13:30:27.093007 4724 scope.go:117] "RemoveContainer" containerID="fc8f38930fdb54e9a403db9b885d3f9851d594e9a0ccf1b0a7c5b9f3e113b62c"
Feb 26 13:30:27 crc kubenswrapper[4724]: I0226 13:30:27.136543 4724 scope.go:117] "RemoveContainer" containerID="bfb9afa390143ed1c8ea6116215b67c2714a32a399fd5707fd6c9918cbff9cef"
Feb 26 13:30:41 crc kubenswrapper[4724]: E0226 13:30:41.127974 4724 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.08s"
Feb 26 13:30:47 crc kubenswrapper[4724]: I0226 13:30:47.073881 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 13:30:47 crc kubenswrapper[4724]: I0226 13:30:47.081782 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 13:31:16 crc kubenswrapper[4724]: I0226 13:31:16.906988 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 13:31:16 crc kubenswrapper[4724]: I0226 13:31:16.907978 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.547014 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hnsm4"]
Feb 26 13:31:20 crc kubenswrapper[4724]: E0226 13:31:20.551439 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0198bbfc-b32b-4865-9407-843708d712a1" containerName="oc"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.551568 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0198bbfc-b32b-4865-9407-843708d712a1" containerName="oc"
Feb 26 13:31:20 crc kubenswrapper[4724]: E0226 13:31:20.551710 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d732943-e434-4bb5-b301-74a6f7f2ce09" containerName="collect-profiles"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.551809 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d732943-e434-4bb5-b301-74a6f7f2ce09" containerName="collect-profiles"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.552227 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0198bbfc-b32b-4865-9407-843708d712a1" containerName="oc"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.552310 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d732943-e434-4bb5-b301-74a6f7f2ce09" containerName="collect-profiles"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.555968 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.562772 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnsm4"]
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.689154 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fpd\" (UniqueName: \"kubernetes.io/projected/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-kube-api-access-l5fpd\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.689706 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-catalog-content\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.689744 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-utilities\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.793057 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-catalog-content\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.793137 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-utilities\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.793253 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fpd\" (UniqueName: \"kubernetes.io/projected/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-kube-api-access-l5fpd\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.794082 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-catalog-content\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.794095 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-utilities\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.820006 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fpd\" (UniqueName: \"kubernetes.io/projected/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-kube-api-access-l5fpd\") pod \"redhat-operators-hnsm4\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:20 crc kubenswrapper[4724]: I0226 13:31:20.899425 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:21 crc kubenswrapper[4724]: I0226 13:31:21.718754 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnsm4"]
Feb 26 13:31:22 crc kubenswrapper[4724]: I0226 13:31:22.668736 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerID="bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4" exitCode=0
Feb 26 13:31:22 crc kubenswrapper[4724]: I0226 13:31:22.668994 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerDied","Data":"bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4"}
Feb 26 13:31:22 crc kubenswrapper[4724]: I0226 13:31:22.669420 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerStarted","Data":"bec423c734233af259cf025710f222583433a77b5448c0d76b1ba35f064669f9"}
Feb 26 13:31:25 crc kubenswrapper[4724]: I0226 13:31:25.711354 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerStarted","Data":"6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91"}
Feb 26 13:31:42 crc kubenswrapper[4724]: I0226 13:31:42.932419 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerID="6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91" exitCode=0
Feb 26 13:31:42 crc kubenswrapper[4724]: I0226 13:31:42.932508 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerDied","Data":"6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91"}
Feb 26 13:31:44 crc kubenswrapper[4724]: I0226 13:31:44.959524 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerStarted","Data":"f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b"}
Feb 26 13:31:45 crc kubenswrapper[4724]: I0226 13:31:45.012077 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hnsm4" podStartSLOduration=3.723511562 podStartE2EDuration="25.011995718s" podCreationTimestamp="2026-02-26 13:31:20 +0000 UTC" firstStartedPulling="2026-02-26 13:31:22.672876921 +0000 UTC m=+8749.328616036" lastFinishedPulling="2026-02-26 13:31:43.961361077 +0000 UTC m=+8770.617100192" observedRunningTime="2026-02-26 13:31:44.991552082 +0000 UTC m=+8771.647291217" watchObservedRunningTime="2026-02-26 13:31:45.011995718 +0000 UTC m=+8771.667734833"
Feb 26 13:31:46 crc kubenswrapper[4724]: I0226 13:31:46.906739 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 13:31:46 crc kubenswrapper[4724]: I0226 13:31:46.909021 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 13:31:46 crc kubenswrapper[4724]: I0226 13:31:46.909236 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 13:31:46 crc kubenswrapper[4724]: I0226 13:31:46.911034 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"100059746acf6ca85e65372a253a4172b975a3b5ca453fd61bc9f92ecf616151"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 13:31:46 crc kubenswrapper[4724]: I0226 13:31:46.911477 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://100059746acf6ca85e65372a253a4172b975a3b5ca453fd61bc9f92ecf616151" gracePeriod=600
Feb 26 13:31:48 crc kubenswrapper[4724]: I0226 13:31:48.020418 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="100059746acf6ca85e65372a253a4172b975a3b5ca453fd61bc9f92ecf616151" exitCode=0
Feb 26 13:31:48 crc kubenswrapper[4724]: I0226 13:31:48.020513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"100059746acf6ca85e65372a253a4172b975a3b5ca453fd61bc9f92ecf616151"}
Feb 26 13:31:48 crc kubenswrapper[4724]: I0226 13:31:48.023381 4724 scope.go:117] "RemoveContainer" containerID="5d1590bcf201e1c81b91cc9ce0f14c749ffef67a16eeae7e6e1fac6afbdc3078"
Feb 26 13:31:48 crc kubenswrapper[4724]: I0226 13:31:48.024766 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9"}
Feb 26 13:31:50 crc kubenswrapper[4724]: I0226 13:31:50.900579 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:50 crc kubenswrapper[4724]: I0226 13:31:50.901694 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hnsm4"
Feb 26 13:31:51 crc kubenswrapper[4724]: I0226 13:31:51.971931 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=<
Feb 26 13:31:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 13:31:51 crc kubenswrapper[4724]: >
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.185289 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535212-j2qjq"]
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.187850 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535212-j2qjq"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.202850 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.203261 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.203603 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.209008 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535212-j2qjq"]
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.264214 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dt88\" (UniqueName: \"kubernetes.io/projected/3b28bed1-10a5-4eb2-83ee-95cac6bccef9-kube-api-access-7dt88\") pod \"auto-csr-approver-29535212-j2qjq\" (UID: \"3b28bed1-10a5-4eb2-83ee-95cac6bccef9\") " pod="openshift-infra/auto-csr-approver-29535212-j2qjq"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.367601 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dt88\" (UniqueName: \"kubernetes.io/projected/3b28bed1-10a5-4eb2-83ee-95cac6bccef9-kube-api-access-7dt88\") pod \"auto-csr-approver-29535212-j2qjq\" (UID: \"3b28bed1-10a5-4eb2-83ee-95cac6bccef9\") " pod="openshift-infra/auto-csr-approver-29535212-j2qjq"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.416596 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dt88\" (UniqueName: \"kubernetes.io/projected/3b28bed1-10a5-4eb2-83ee-95cac6bccef9-kube-api-access-7dt88\") pod \"auto-csr-approver-29535212-j2qjq\" (UID: \"3b28bed1-10a5-4eb2-83ee-95cac6bccef9\") " pod="openshift-infra/auto-csr-approver-29535212-j2qjq"
Feb 26 13:32:00 crc kubenswrapper[4724]: I0226 13:32:00.529319 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535212-j2qjq"
Feb 26 13:32:01 crc kubenswrapper[4724]: I0226 13:32:01.971068 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=<
Feb 26 13:32:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 13:32:01 crc kubenswrapper[4724]: >
Feb 26 13:32:02 crc kubenswrapper[4724]: I0226 13:32:02.055253 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535212-j2qjq"]
Feb 26 13:32:02 crc kubenswrapper[4724]: W0226 13:32:02.063523 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b28bed1_10a5_4eb2_83ee_95cac6bccef9.slice/crio-4cf4dbb9b44981d427559f9f574505f39869f9b69afef7d288748dd5838eba76 WatchSource:0}: Error finding container 4cf4dbb9b44981d427559f9f574505f39869f9b69afef7d288748dd5838eba76: Status 404 returned error can't find the container with id 4cf4dbb9b44981d427559f9f574505f39869f9b69afef7d288748dd5838eba76
Feb 26 13:32:02 crc kubenswrapper[4724]: I0226 13:32:02.292508 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" event={"ID":"3b28bed1-10a5-4eb2-83ee-95cac6bccef9","Type":"ContainerStarted","Data":"4cf4dbb9b44981d427559f9f574505f39869f9b69afef7d288748dd5838eba76"}
Feb 26 13:32:04 crc kubenswrapper[4724]: I0226 13:32:04.355824 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" event={"ID":"3b28bed1-10a5-4eb2-83ee-95cac6bccef9","Type":"ContainerStarted","Data":"9c4d61c3c81678563d9dc9fc90dc6dfbfc9841aad3575eea10866a3075a9bae6"}
Feb 26 13:32:04 crc kubenswrapper[4724]: I0226 13:32:04.433777 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" podStartSLOduration=3.08408929 podStartE2EDuration="4.43374075s" podCreationTimestamp="2026-02-26 13:32:00 +0000 UTC" firstStartedPulling="2026-02-26 13:32:02.180830898 +0000 UTC m=+8788.836570013" lastFinishedPulling="2026-02-26 13:32:03.530482358 +0000 UTC m=+8790.186221473" observedRunningTime="2026-02-26 13:32:04.388550789 +0000 UTC m=+8791.044289914" watchObservedRunningTime="2026-02-26 13:32:04.43374075 +0000 UTC m=+8791.089479865"
Feb 26 13:32:08 crc kubenswrapper[4724]: I0226 13:32:08.408691 4724 generic.go:334] "Generic (PLEG): container finished" podID="3b28bed1-10a5-4eb2-83ee-95cac6bccef9" containerID="9c4d61c3c81678563d9dc9fc90dc6dfbfc9841aad3575eea10866a3075a9bae6" exitCode=0
Feb 26 13:32:08 crc kubenswrapper[4724]: I0226 13:32:08.408797 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" event={"ID":"3b28bed1-10a5-4eb2-83ee-95cac6bccef9","Type":"ContainerDied","Data":"9c4d61c3c81678563d9dc9fc90dc6dfbfc9841aad3575eea10866a3075a9bae6"}
Feb 26 13:32:10 crc kubenswrapper[4724]: I0226 13:32:10.433515 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" event={"ID":"3b28bed1-10a5-4eb2-83ee-95cac6bccef9","Type":"ContainerDied","Data":"4cf4dbb9b44981d427559f9f574505f39869f9b69afef7d288748dd5838eba76"}
Feb 26 13:32:10 crc kubenswrapper[4724]: I0226 13:32:10.434700 4724 pod_container_deletor.go:80] "Container not found in pod's containers"
containerID="4cf4dbb9b44981d427559f9f574505f39869f9b69afef7d288748dd5838eba76" Feb 26 13:32:10 crc kubenswrapper[4724]: I0226 13:32:10.434996 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" Feb 26 13:32:10 crc kubenswrapper[4724]: I0226 13:32:10.595689 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dt88\" (UniqueName: \"kubernetes.io/projected/3b28bed1-10a5-4eb2-83ee-95cac6bccef9-kube-api-access-7dt88\") pod \"3b28bed1-10a5-4eb2-83ee-95cac6bccef9\" (UID: \"3b28bed1-10a5-4eb2-83ee-95cac6bccef9\") " Feb 26 13:32:10 crc kubenswrapper[4724]: I0226 13:32:10.629778 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b28bed1-10a5-4eb2-83ee-95cac6bccef9-kube-api-access-7dt88" (OuterVolumeSpecName: "kube-api-access-7dt88") pod "3b28bed1-10a5-4eb2-83ee-95cac6bccef9" (UID: "3b28bed1-10a5-4eb2-83ee-95cac6bccef9"). InnerVolumeSpecName "kube-api-access-7dt88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:32:10 crc kubenswrapper[4724]: I0226 13:32:10.703248 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dt88\" (UniqueName: \"kubernetes.io/projected/3b28bed1-10a5-4eb2-83ee-95cac6bccef9-kube-api-access-7dt88\") on node \"crc\" DevicePath \"\"" Feb 26 13:32:11 crc kubenswrapper[4724]: I0226 13:32:11.444418 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535212-j2qjq" Feb 26 13:32:11 crc kubenswrapper[4724]: I0226 13:32:11.546249 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535206-cjh7b"] Feb 26 13:32:11 crc kubenswrapper[4724]: I0226 13:32:11.558610 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535206-cjh7b"] Feb 26 13:32:11 crc kubenswrapper[4724]: I0226 13:32:11.965650 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:32:11 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:32:11 crc kubenswrapper[4724]: > Feb 26 13:32:11 crc kubenswrapper[4724]: I0226 13:32:11.989216 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582f9049-54e0-4adb-bf40-4fbb18f663f7" path="/var/lib/kubelet/pods/582f9049-54e0-4adb-bf40-4fbb18f663f7/volumes" Feb 26 13:32:21 crc kubenswrapper[4724]: I0226 13:32:21.964465 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:32:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:32:21 crc kubenswrapper[4724]: > Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.814308 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vh89w"] Feb 26 13:32:22 crc kubenswrapper[4724]: E0226 13:32:22.816084 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b28bed1-10a5-4eb2-83ee-95cac6bccef9" containerName="oc" Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.816109 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b28bed1-10a5-4eb2-83ee-95cac6bccef9" 
containerName="oc" Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.816382 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b28bed1-10a5-4eb2-83ee-95cac6bccef9" containerName="oc" Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.821082 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.833639 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vh89w"] Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.950321 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkvx\" (UniqueName: \"kubernetes.io/projected/e576a5e1-d625-454d-a6a4-e11beb8c616d-kube-api-access-mbkvx\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.950885 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-utilities\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:22 crc kubenswrapper[4724]: I0226 13:32:22.951262 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-catalog-content\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.055990 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-utilities\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.056137 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-catalog-content\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.056650 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-utilities\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.056755 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbkvx\" (UniqueName: \"kubernetes.io/projected/e576a5e1-d625-454d-a6a4-e11beb8c616d-kube-api-access-mbkvx\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.057321 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-catalog-content\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.096143 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbkvx\" (UniqueName: \"kubernetes.io/projected/e576a5e1-d625-454d-a6a4-e11beb8c616d-kube-api-access-mbkvx\") pod \"community-operators-vh89w\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.155054 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:23 crc kubenswrapper[4724]: I0226 13:32:23.907419 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vh89w"] Feb 26 13:32:24 crc kubenswrapper[4724]: I0226 13:32:24.623807 4724 generic.go:334] "Generic (PLEG): container finished" podID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerID="d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d" exitCode=0 Feb 26 13:32:24 crc kubenswrapper[4724]: I0226 13:32:24.626213 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerDied","Data":"d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d"} Feb 26 13:32:24 crc kubenswrapper[4724]: I0226 13:32:24.626494 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerStarted","Data":"788fb5dcd04b0766b06a91f9adf4edea79d2a8ac39d5c8308d790732a0082b3a"} Feb 26 13:32:24 crc kubenswrapper[4724]: I0226 13:32:24.633719 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:32:26 crc kubenswrapper[4724]: I0226 13:32:26.655509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerStarted","Data":"acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b"} Feb 26 13:32:27 crc kubenswrapper[4724]: I0226 13:32:27.326511 4724 scope.go:117] "RemoveContainer" containerID="868a6495024e6e7be1d5a44a061d100280b267b484f88e3ed59d22cabbf51b3a" Feb 26 13:32:30 crc kubenswrapper[4724]: I0226 13:32:30.864452 4724 generic.go:334] "Generic (PLEG): container finished" podID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerID="acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b" exitCode=0 Feb 26 13:32:30 crc kubenswrapper[4724]: I0226 13:32:30.864561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerDied","Data":"acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b"} Feb 26 13:32:31 crc kubenswrapper[4724]: I0226 13:32:31.882799 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerStarted","Data":"acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32"} Feb 26 13:32:31 crc kubenswrapper[4724]: I0226 13:32:31.927866 4724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vh89w" podStartSLOduration=3.180457059 podStartE2EDuration="9.927832587s" podCreationTimestamp="2026-02-26 13:32:22 +0000 UTC" firstStartedPulling="2026-02-26 13:32:24.633314458 +0000 UTC m=+8811.289053573" lastFinishedPulling="2026-02-26 13:32:31.380689986 +0000 UTC m=+8818.036429101" observedRunningTime="2026-02-26 13:32:31.920859701 +0000 UTC m=+8818.576598826" watchObservedRunningTime="2026-02-26 13:32:31.927832587 +0000 UTC m=+8818.583571722" Feb 26 13:32:31 crc kubenswrapper[4724]: I0226 13:32:31.988092 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:32:31 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:32:31 crc kubenswrapper[4724]: > Feb 26 13:32:33 crc kubenswrapper[4724]: I0226 13:32:33.155482 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:33 crc kubenswrapper[4724]: I0226 13:32:33.157172 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:34 crc kubenswrapper[4724]: I0226 13:32:34.239022 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vh89w" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="registry-server" probeResult="failure" output=< Feb 26 13:32:34 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:32:34 crc kubenswrapper[4724]: > Feb 26 13:32:41 crc kubenswrapper[4724]: I0226 13:32:41.969220 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:32:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:32:41 crc kubenswrapper[4724]: > Feb 26 13:32:43 crc kubenswrapper[4724]: I0226 13:32:43.227350 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:43 crc kubenswrapper[4724]: I0226 13:32:43.300291 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:45 crc kubenswrapper[4724]: I0226 13:32:45.106237 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vh89w"] Feb 26 13:32:45 crc kubenswrapper[4724]: I0226 13:32:45.107519 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vh89w" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="registry-server" containerID="cri-o://acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32" gracePeriod=2 Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.060214 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.077196 4724 generic.go:334] "Generic (PLEG): container finished" podID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerID="acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32" exitCode=0 Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.077298 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerDied","Data":"acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32"} Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.077448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vh89w" event={"ID":"e576a5e1-d625-454d-a6a4-e11beb8c616d","Type":"ContainerDied","Data":"788fb5dcd04b0766b06a91f9adf4edea79d2a8ac39d5c8308d790732a0082b3a"} Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.077353 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vh89w" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.077482 4724 scope.go:117] "RemoveContainer" containerID="acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.119978 4724 scope.go:117] "RemoveContainer" containerID="acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.170860 4724 scope.go:117] "RemoveContainer" containerID="d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.189984 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbkvx\" (UniqueName: \"kubernetes.io/projected/e576a5e1-d625-454d-a6a4-e11beb8c616d-kube-api-access-mbkvx\") pod \"e576a5e1-d625-454d-a6a4-e11beb8c616d\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.190315 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-utilities\") pod \"e576a5e1-d625-454d-a6a4-e11beb8c616d\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.190821 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-catalog-content\") pod \"e576a5e1-d625-454d-a6a4-e11beb8c616d\" (UID: \"e576a5e1-d625-454d-a6a4-e11beb8c616d\") " Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.193764 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-utilities" (OuterVolumeSpecName: "utilities") pod "e576a5e1-d625-454d-a6a4-e11beb8c616d" (UID: "e576a5e1-d625-454d-a6a4-e11beb8c616d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.199367 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.213133 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e576a5e1-d625-454d-a6a4-e11beb8c616d-kube-api-access-mbkvx" (OuterVolumeSpecName: "kube-api-access-mbkvx") pod "e576a5e1-d625-454d-a6a4-e11beb8c616d" (UID: "e576a5e1-d625-454d-a6a4-e11beb8c616d"). InnerVolumeSpecName "kube-api-access-mbkvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.244056 4724 scope.go:117] "RemoveContainer" containerID="acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32" Feb 26 13:32:46 crc kubenswrapper[4724]: E0226 13:32:46.249715 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32\": container with ID starting with acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32 not found: ID does not exist" containerID="acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.249782 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32"} err="failed to get container status \"acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32\": rpc error: code = NotFound desc = could not find container \"acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32\": container with ID starting with acf3d3c209c0d7db45c24ba6ea64eeada8333a330927bfc3ef3c46c9e76fbd32 not found: ID does not exist" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.249836 4724 scope.go:117] "RemoveContainer" containerID="acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b" Feb 26 13:32:46 crc kubenswrapper[4724]: E0226 13:32:46.253570 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b\": container with ID starting with acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b not found: ID does not exist" containerID="acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.253615 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b"} err="failed to get container status \"acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b\": rpc error: code = NotFound desc = could not find container \"acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b\": container with ID starting with acc08bbd1d8d56f576a01d2951f9875afc0885c53abe83ea503444523cbe5d4b not found: ID does not exist" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.253648 4724 scope.go:117] "RemoveContainer" containerID="d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d" Feb 26 13:32:46 crc kubenswrapper[4724]: E0226 13:32:46.255270 4724 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d\": container with ID starting with d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d not found: ID does not exist" containerID="d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.255311 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d"} err="failed to get container status \"d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d\": rpc error: code = NotFound desc = could not find container \"d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d\": container with ID starting with d81ced652a3a418343ba27c654b439603ed68124fef5eda591ff73383a645f9d not found: ID does not exist" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.297843 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e576a5e1-d625-454d-a6a4-e11beb8c616d" (UID: "e576a5e1-d625-454d-a6a4-e11beb8c616d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.302663 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbkvx\" (UniqueName: \"kubernetes.io/projected/e576a5e1-d625-454d-a6a4-e11beb8c616d-kube-api-access-mbkvx\") on node \"crc\" DevicePath \"\"" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.302703 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e576a5e1-d625-454d-a6a4-e11beb8c616d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.427752 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vh89w"] Feb 26 13:32:46 crc kubenswrapper[4724]: I0226 13:32:46.441878 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vh89w"] Feb 26 13:32:47 crc kubenswrapper[4724]: I0226 13:32:47.995061 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" path="/var/lib/kubelet/pods/e576a5e1-d625-454d-a6a4-e11beb8c616d/volumes" Feb 26 13:32:51 crc kubenswrapper[4724]: I0226 13:32:51.954695 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:32:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:32:51 crc kubenswrapper[4724]: > Feb 26 13:33:01 crc kubenswrapper[4724]: I0226 13:33:01.955489 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" probeResult="failure" output=< Feb 26 13:33:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:33:01 crc kubenswrapper[4724]: > Feb 26 13:33:10 crc kubenswrapper[4724]: I0226 13:33:10.960412 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hnsm4" Feb 26 
13:33:11 crc kubenswrapper[4724]: I0226 13:33:11.025971 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hnsm4" Feb 26 13:33:11 crc kubenswrapper[4724]: I0226 13:33:11.207193 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnsm4"] Feb 26 13:33:12 crc kubenswrapper[4724]: I0226 13:33:12.423665 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hnsm4" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" containerID="cri-o://f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b" gracePeriod=2 Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.168691 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnsm4" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.253002 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5fpd\" (UniqueName: \"kubernetes.io/projected/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-kube-api-access-l5fpd\") pod \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.253335 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-catalog-content\") pod \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.253375 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-utilities\") pod \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\" (UID: \"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a\") " Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.262081 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-utilities" (OuterVolumeSpecName: "utilities") pod "a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" (UID: "a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.283823 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-kube-api-access-l5fpd" (OuterVolumeSpecName: "kube-api-access-l5fpd") pod "a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" (UID: "a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a"). InnerVolumeSpecName "kube-api-access-l5fpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.359372 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5fpd\" (UniqueName: \"kubernetes.io/projected/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-kube-api-access-l5fpd\") on node \"crc\" DevicePath \"\"" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.359420 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.408101 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" (UID: "a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.440326 4724 generic.go:334] "Generic (PLEG): container finished" podID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerID="f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b" exitCode=0 Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.440407 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerDied","Data":"f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b"} Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.440439 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnsm4" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.440457 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnsm4" event={"ID":"a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a","Type":"ContainerDied","Data":"bec423c734233af259cf025710f222583433a77b5448c0d76b1ba35f064669f9"} Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.440490 4724 scope.go:117] "RemoveContainer" containerID="f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.470010 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.473895 4724 scope.go:117] "RemoveContainer" containerID="6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.494737 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnsm4"] Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.505909 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hnsm4"] Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.540284 4724 scope.go:117] "RemoveContainer" containerID="bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.594796 4724 scope.go:117] "RemoveContainer" containerID="f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b" Feb 26 13:33:13 crc kubenswrapper[4724]: E0226 13:33:13.595756 4724 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b\": container with ID starting with f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b not found: ID does not exist" containerID="f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.595855 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b"} err="failed to get container status \"f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b\": rpc error: code = NotFound desc = could not find container \"f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b\": container with ID starting with f7bbaac2b7d9bffb33722440489157a384747eeaa1bfe2837f09ee549af3194b not found: ID does not exist" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.595897 4724 scope.go:117] "RemoveContainer" containerID="6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91" Feb 26 13:33:13 crc kubenswrapper[4724]: E0226 13:33:13.596703 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91\": container with ID starting with 6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91 not found: ID does not exist" containerID="6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.596902 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91"} err="failed to get container status \"6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91\": rpc error: code = NotFound desc = could not find container \"6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91\": container with ID starting with 6fa3d4b8dd1f9746f884d486ee9f8114baf5c22b1597c225580563f3d8b49a91 not found: ID does not exist" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.596942 4724 scope.go:117] "RemoveContainer" containerID="bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4" Feb 26 13:33:13 crc kubenswrapper[4724]: E0226 13:33:13.597575 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4\": container with ID starting with bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4 not found: ID does not exist" containerID="bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4" Feb 26 13:33:13 crc kubenswrapper[4724]: I0226 13:33:13.597616 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4"} err="failed to get container status \"bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4\": rpc error: code = NotFound desc = could not find container \"bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4\": container with ID starting with bd83006574ee62b9f4ef54a3fdc84c19d2e9306d9e3c46ca7bf86ed62f3992e4 not found: ID does not exist" Feb 26 13:33:14 crc kubenswrapper[4724]: I0226 13:33:14.018116 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" path="/var/lib/kubelet/pods/a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a/volumes" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.185432 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535214-28kr8"] Feb 26 13:34:00 crc kubenswrapper[4724]: E0226 13:34:00.213176 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="extract-utilities" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213235 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="extract-utilities" Feb 26 13:34:00 crc kubenswrapper[4724]: E0226 13:34:00.213268 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="extract-content" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213277 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="extract-content" Feb 26 13:34:00 crc kubenswrapper[4724]: E0226 13:34:00.213315 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="registry-server" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213322 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="registry-server" Feb 26 13:34:00 crc kubenswrapper[4724]: E0226 13:34:00.213344 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="extract-utilities" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213351 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="extract-utilities" Feb 26 13:34:00 crc kubenswrapper[4724]: E0226 13:34:00.213376 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="extract-content" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213383 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="extract-content" Feb 26 13:34:00 crc kubenswrapper[4724]: E0226 13:34:00.213408 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213414 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213808 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e576a5e1-d625-454d-a6a4-e11beb8c616d" containerName="registry-server" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.213842 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f1d6a7-ca11-4de6-8fd6-7bf7ac27c83a" containerName="registry-server" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.220541 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535214-28kr8"] Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.220690 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.226273 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.226958 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.234017 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.257890 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2r4n\" (UniqueName: \"kubernetes.io/projected/1e875781-84c5-41e3-9b07-a6956f211aa6-kube-api-access-r2r4n\") pod \"auto-csr-approver-29535214-28kr8\" (UID: \"1e875781-84c5-41e3-9b07-a6956f211aa6\") " pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.361021 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2r4n\" (UniqueName: \"kubernetes.io/projected/1e875781-84c5-41e3-9b07-a6956f211aa6-kube-api-access-r2r4n\") pod \"auto-csr-approver-29535214-28kr8\" (UID: \"1e875781-84c5-41e3-9b07-a6956f211aa6\") " pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.394222 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2r4n\" (UniqueName: \"kubernetes.io/projected/1e875781-84c5-41e3-9b07-a6956f211aa6-kube-api-access-r2r4n\") pod \"auto-csr-approver-29535214-28kr8\" (UID: \"1e875781-84c5-41e3-9b07-a6956f211aa6\") " pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:00 crc kubenswrapper[4724]: I0226 13:34:00.555520 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:01 crc kubenswrapper[4724]: I0226 13:34:01.176780 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535214-28kr8"] Feb 26 13:34:02 crc kubenswrapper[4724]: I0226 13:34:02.070738 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535214-28kr8" event={"ID":"1e875781-84c5-41e3-9b07-a6956f211aa6","Type":"ContainerStarted","Data":"2fe888ab2e5426c1733bb04bfd7e3585d18b05443d8b7ed23ec63c73d3bc9e54"} Feb 26 13:34:03 crc kubenswrapper[4724]: I0226 13:34:03.083167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535214-28kr8" event={"ID":"1e875781-84c5-41e3-9b07-a6956f211aa6","Type":"ContainerStarted","Data":"82b96c00a98ee29621dfb70fdee5c331bb64b361d96c16bfda0deea5dbfeb311"} Feb 26 13:34:03 crc kubenswrapper[4724]: I0226 13:34:03.109878 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535214-28kr8" podStartSLOduration=1.806476315 podStartE2EDuration="3.109820746s" podCreationTimestamp="2026-02-26 13:34:00 +0000 UTC" firstStartedPulling="2026-02-26 13:34:01.197383409 +0000 UTC m=+8907.853122524" lastFinishedPulling="2026-02-26 13:34:02.50072784 +0000 UTC m=+8909.156466955" observedRunningTime="2026-02-26 13:34:03.10402044 +0000 UTC m=+8909.759759555" watchObservedRunningTime="2026-02-26 13:34:03.109820746 +0000 UTC m=+8909.765559861" Feb 26 13:34:04 crc kubenswrapper[4724]: I0226 13:34:04.100818 4724 generic.go:334] "Generic (PLEG): container finished" podID="1e875781-84c5-41e3-9b07-a6956f211aa6" containerID="82b96c00a98ee29621dfb70fdee5c331bb64b361d96c16bfda0deea5dbfeb311" exitCode=0 Feb 26 13:34:04 crc kubenswrapper[4724]: I0226 13:34:04.101042 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535214-28kr8" event={"ID":"1e875781-84c5-41e3-9b07-a6956f211aa6","Type":"ContainerDied","Data":"82b96c00a98ee29621dfb70fdee5c331bb64b361d96c16bfda0deea5dbfeb311"} Feb 26 13:34:05 crc kubenswrapper[4724]: I0226 13:34:05.537692 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:05 crc kubenswrapper[4724]: I0226 13:34:05.630135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2r4n\" (UniqueName: \"kubernetes.io/projected/1e875781-84c5-41e3-9b07-a6956f211aa6-kube-api-access-r2r4n\") pod \"1e875781-84c5-41e3-9b07-a6956f211aa6\" (UID: \"1e875781-84c5-41e3-9b07-a6956f211aa6\") " Feb 26 13:34:05 crc kubenswrapper[4724]: I0226 13:34:05.642854 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e875781-84c5-41e3-9b07-a6956f211aa6-kube-api-access-r2r4n" (OuterVolumeSpecName: "kube-api-access-r2r4n") pod "1e875781-84c5-41e3-9b07-a6956f211aa6" (UID: "1e875781-84c5-41e3-9b07-a6956f211aa6"). InnerVolumeSpecName "kube-api-access-r2r4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:34:05 crc kubenswrapper[4724]: I0226 13:34:05.732569 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2r4n\" (UniqueName: \"kubernetes.io/projected/1e875781-84c5-41e3-9b07-a6956f211aa6-kube-api-access-r2r4n\") on node \"crc\" DevicePath \"\"" Feb 26 13:34:06 crc kubenswrapper[4724]: I0226 13:34:06.123117 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535214-28kr8" event={"ID":"1e875781-84c5-41e3-9b07-a6956f211aa6","Type":"ContainerDied","Data":"2fe888ab2e5426c1733bb04bfd7e3585d18b05443d8b7ed23ec63c73d3bc9e54"} Feb 26 13:34:06 crc kubenswrapper[4724]: I0226 13:34:06.123153 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fe888ab2e5426c1733bb04bfd7e3585d18b05443d8b7ed23ec63c73d3bc9e54" Feb 26 13:34:06 crc kubenswrapper[4724]: I0226 13:34:06.123213 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535214-28kr8" Feb 26 13:34:06 crc kubenswrapper[4724]: I0226 13:34:06.247405 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535208-pj82n"] Feb 26 13:34:06 crc kubenswrapper[4724]: I0226 13:34:06.263545 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535208-pj82n"] Feb 26 13:34:07 crc kubenswrapper[4724]: I0226 13:34:07.987951 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0430b1-2564-4984-b53e-e5dec336f43d" path="/var/lib/kubelet/pods/4c0430b1-2564-4984-b53e-e5dec336f43d/volumes" Feb 26 13:34:16 crc kubenswrapper[4724]: I0226 13:34:16.905959 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:34:16 crc kubenswrapper[4724]: I0226 13:34:16.907145 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:34:27 crc kubenswrapper[4724]: I0226 13:34:27.989754 4724 scope.go:117] "RemoveContainer" containerID="6ff4f136c52b4a65fc481e3c3e2d62faa5942fb1fb4ca0e3a5b75791eb94141d" Feb 26 13:34:46 crc kubenswrapper[4724]: I0226 13:34:46.908542 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:34:46 crc kubenswrapper[4724]: I0226 13:34:46.909623 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:35:16 crc kubenswrapper[4724]: I0226 13:35:16.907201 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:35:16 crc kubenswrapper[4724]: I0226 13:35:16.908593 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:35:16 crc kubenswrapper[4724]: I0226 13:35:16.908751 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:35:16 crc kubenswrapper[4724]: I0226 13:35:16.911263 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:35:16 crc kubenswrapper[4724]: I0226 13:35:16.911404 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" gracePeriod=600 Feb 26 13:35:17 crc kubenswrapper[4724]: E0226 13:35:17.039678 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:35:18 crc kubenswrapper[4724]: I0226 13:35:18.000029 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" exitCode=0 Feb 26 13:35:18 crc kubenswrapper[4724]: I0226 13:35:18.000117 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9"} Feb 26 13:35:18 crc kubenswrapper[4724]: I0226 13:35:18.000242 4724 scope.go:117] "RemoveContainer" containerID="100059746acf6ca85e65372a253a4172b975a3b5ca453fd61bc9f92ecf616151" Feb 26 13:35:18 crc kubenswrapper[4724]: I0226 13:35:18.003229 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:35:18 crc kubenswrapper[4724]: E0226 13:35:18.003572 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" 
Feb 26 13:35:30 crc kubenswrapper[4724]: I0226 13:35:30.976638 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9"
Feb 26 13:35:30 crc kubenswrapper[4724]: E0226 13:35:30.978126 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:35:42 crc kubenswrapper[4724]: I0226 13:35:42.976114 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9"
Feb 26 13:35:42 crc kubenswrapper[4724]: E0226 13:35:42.977457 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:35:48 crc kubenswrapper[4724]: E0226 13:35:48.670666 4724 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:35164->38.102.83.145:45037: write tcp 38.102.83.145:35164->38.102.83.145:45037: write: broken pipe
Feb 26 13:35:53 crc kubenswrapper[4724]: I0226 13:35:53.990890 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9"
Feb 26 13:35:53 crc kubenswrapper[4724]: E0226 13:35:53.991683 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.165711 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535216-s6shz"]
Feb 26 13:36:00 crc kubenswrapper[4724]: E0226 13:36:00.168904 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e875781-84c5-41e3-9b07-a6956f211aa6" containerName="oc"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.169044 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e875781-84c5-41e3-9b07-a6956f211aa6" containerName="oc"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.169509 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e875781-84c5-41e3-9b07-a6956f211aa6" containerName="oc"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.173490 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535216-s6shz"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.178745 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.179094 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.179281 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.196711 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535216-s6shz"]
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.324299 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8l8\" (UniqueName: \"kubernetes.io/projected/0ba7667e-6b8a-4c40-8211-4ac22e1460ec-kube-api-access-9c8l8\") pod \"auto-csr-approver-29535216-s6shz\" (UID: \"0ba7667e-6b8a-4c40-8211-4ac22e1460ec\") " pod="openshift-infra/auto-csr-approver-29535216-s6shz"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.427545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8l8\" (UniqueName: \"kubernetes.io/projected/0ba7667e-6b8a-4c40-8211-4ac22e1460ec-kube-api-access-9c8l8\") pod \"auto-csr-approver-29535216-s6shz\" (UID: \"0ba7667e-6b8a-4c40-8211-4ac22e1460ec\") " pod="openshift-infra/auto-csr-approver-29535216-s6shz"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.455623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8l8\" (UniqueName: \"kubernetes.io/projected/0ba7667e-6b8a-4c40-8211-4ac22e1460ec-kube-api-access-9c8l8\") pod \"auto-csr-approver-29535216-s6shz\" (UID: \"0ba7667e-6b8a-4c40-8211-4ac22e1460ec\") " pod="openshift-infra/auto-csr-approver-29535216-s6shz"
Feb 26 13:36:00 crc kubenswrapper[4724]: I0226 13:36:00.535575 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535216-s6shz"
Feb 26 13:36:01 crc kubenswrapper[4724]: I0226 13:36:01.376488 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535216-s6shz"]
Feb 26 13:36:01 crc kubenswrapper[4724]: I0226 13:36:01.521481 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535216-s6shz" event={"ID":"0ba7667e-6b8a-4c40-8211-4ac22e1460ec","Type":"ContainerStarted","Data":"2dd7fbf6a37a5976daecd939d93d8628c3aa6e39f41c5eccb1afdbad81ce7c44"}
Feb 26 13:36:03 crc kubenswrapper[4724]: I0226 13:36:03.555302 4724 generic.go:334] "Generic (PLEG): container finished" podID="0ba7667e-6b8a-4c40-8211-4ac22e1460ec" containerID="e8d2d35b0f4d01235d2d25d0375568cb3f0b8ae28258223d76c61a5aa6744b49" exitCode=0
Feb 26 13:36:03 crc kubenswrapper[4724]: I0226 13:36:03.555492 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535216-s6shz" event={"ID":"0ba7667e-6b8a-4c40-8211-4ac22e1460ec","Type":"ContainerDied","Data":"e8d2d35b0f4d01235d2d25d0375568cb3f0b8ae28258223d76c61a5aa6744b49"}
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.062673 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535216-s6shz"
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.286030 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c8l8\" (UniqueName: \"kubernetes.io/projected/0ba7667e-6b8a-4c40-8211-4ac22e1460ec-kube-api-access-9c8l8\") pod \"0ba7667e-6b8a-4c40-8211-4ac22e1460ec\" (UID: \"0ba7667e-6b8a-4c40-8211-4ac22e1460ec\") "
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.298108 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba7667e-6b8a-4c40-8211-4ac22e1460ec-kube-api-access-9c8l8" (OuterVolumeSpecName: "kube-api-access-9c8l8") pod "0ba7667e-6b8a-4c40-8211-4ac22e1460ec" (UID: "0ba7667e-6b8a-4c40-8211-4ac22e1460ec"). InnerVolumeSpecName "kube-api-access-9c8l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.390347 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c8l8\" (UniqueName: \"kubernetes.io/projected/0ba7667e-6b8a-4c40-8211-4ac22e1460ec-kube-api-access-9c8l8\") on node \"crc\" DevicePath \"\""
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.580573 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535216-s6shz" event={"ID":"0ba7667e-6b8a-4c40-8211-4ac22e1460ec","Type":"ContainerDied","Data":"2dd7fbf6a37a5976daecd939d93d8628c3aa6e39f41c5eccb1afdbad81ce7c44"}
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.580661 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd7fbf6a37a5976daecd939d93d8628c3aa6e39f41c5eccb1afdbad81ce7c44"
Feb 26 13:36:05 crc kubenswrapper[4724]: I0226 13:36:05.581240 4724 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535216-s6shz" Feb 26 13:36:06 crc kubenswrapper[4724]: I0226 13:36:06.180085 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535210-tlvsx"] Feb 26 13:36:06 crc kubenswrapper[4724]: I0226 13:36:06.197151 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535210-tlvsx"] Feb 26 13:36:07 crc kubenswrapper[4724]: I0226 13:36:07.977008 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:36:07 crc kubenswrapper[4724]: E0226 13:36:07.978224 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:36:07 crc kubenswrapper[4724]: I0226 13:36:07.994577 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0198bbfc-b32b-4865-9407-843708d712a1" path="/var/lib/kubelet/pods/0198bbfc-b32b-4865-9407-843708d712a1/volumes" Feb 26 13:36:22 crc kubenswrapper[4724]: I0226 13:36:22.976123 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:36:22 crc kubenswrapper[4724]: E0226 13:36:22.977744 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:36:28 crc kubenswrapper[4724]: I0226 13:36:28.135424 4724 scope.go:117] "RemoveContainer" containerID="f58cc07c57fa0d578cc59f1fc8f36ee5e4ebee12a0be5be108a580c15a023baf" Feb 26 13:36:36 crc kubenswrapper[4724]: I0226 13:36:36.978213 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:36:36 crc kubenswrapper[4724]: E0226 13:36:36.979449 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:36:47 crc kubenswrapper[4724]: I0226 13:36:47.976396 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:36:47 crc kubenswrapper[4724]: E0226 13:36:47.977574 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 
13:37:00 crc kubenswrapper[4724]: I0226 13:37:00.977860 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:37:00 crc kubenswrapper[4724]: E0226 13:37:00.979013 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:37:13 crc kubenswrapper[4724]: I0226 13:37:13.984065 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:37:13 crc kubenswrapper[4724]: E0226 13:37:13.985434 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:37:28 crc kubenswrapper[4724]: I0226 13:37:28.976343 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:37:28 crc kubenswrapper[4724]: E0226 13:37:28.977754 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:37:42 crc kubenswrapper[4724]: I0226 13:37:42.975938 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:37:42 crc kubenswrapper[4724]: E0226 13:37:42.977291 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:37:53 crc kubenswrapper[4724]: I0226 13:37:53.986960 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:37:53 crc kubenswrapper[4724]: E0226 13:37:53.991515 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.221492 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535218-7ssnh"] Feb 26 13:38:00 crc 
kubenswrapper[4724]: E0226 13:38:00.222704 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba7667e-6b8a-4c40-8211-4ac22e1460ec" containerName="oc" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.222841 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba7667e-6b8a-4c40-8211-4ac22e1460ec" containerName="oc" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.223162 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba7667e-6b8a-4c40-8211-4ac22e1460ec" containerName="oc" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.224436 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.229949 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.230145 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.230347 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.251220 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535218-7ssnh"] Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.341498 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2rb\" (UniqueName: \"kubernetes.io/projected/e8907499-9343-4e22-b6a9-6c2b936d3a61-kube-api-access-5f2rb\") pod \"auto-csr-approver-29535218-7ssnh\" (UID: \"e8907499-9343-4e22-b6a9-6c2b936d3a61\") " pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.445324 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f2rb\" (UniqueName: \"kubernetes.io/projected/e8907499-9343-4e22-b6a9-6c2b936d3a61-kube-api-access-5f2rb\") pod \"auto-csr-approver-29535218-7ssnh\" (UID: \"e8907499-9343-4e22-b6a9-6c2b936d3a61\") " pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.471335 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f2rb\" (UniqueName: \"kubernetes.io/projected/e8907499-9343-4e22-b6a9-6c2b936d3a61-kube-api-access-5f2rb\") pod \"auto-csr-approver-29535218-7ssnh\" (UID: \"e8907499-9343-4e22-b6a9-6c2b936d3a61\") " pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:00 crc kubenswrapper[4724]: I0226 13:38:00.557307 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:01 crc kubenswrapper[4724]: I0226 13:38:01.334992 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:38:01 crc kubenswrapper[4724]: I0226 13:38:01.344272 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535218-7ssnh"] Feb 26 13:38:02 crc kubenswrapper[4724]: I0226 13:38:02.135881 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" event={"ID":"e8907499-9343-4e22-b6a9-6c2b936d3a61","Type":"ContainerStarted","Data":"98ea504f34d5799e03dfb0e62cd3d53cdaaabdc3180df2bb949e87be54716a70"} Feb 26 13:38:03 crc kubenswrapper[4724]: I0226 13:38:03.151658 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" event={"ID":"e8907499-9343-4e22-b6a9-6c2b936d3a61","Type":"ContainerStarted","Data":"0634962f1435f0c915d7f7133a38d34885861c08253767b0689d0fe34e16bac6"} Feb 26 13:38:03 crc kubenswrapper[4724]: I0226 13:38:03.184519 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" podStartSLOduration=2.188538008 podStartE2EDuration="3.18448156s" podCreationTimestamp="2026-02-26 13:38:00 +0000 UTC" firstStartedPulling="2026-02-26 13:38:01.334675513 +0000 UTC m=+9147.990414628" lastFinishedPulling="2026-02-26 13:38:02.330619055 +0000 UTC m=+9148.986358180" observedRunningTime="2026-02-26 13:38:03.168095026 +0000 UTC m=+9149.823834151" watchObservedRunningTime="2026-02-26 13:38:03.18448156 +0000 UTC m=+9149.840220675" Feb 26 13:38:04 crc kubenswrapper[4724]: I0226 13:38:04.222734 4724 generic.go:334] "Generic (PLEG): container finished" podID="e8907499-9343-4e22-b6a9-6c2b936d3a61" containerID="0634962f1435f0c915d7f7133a38d34885861c08253767b0689d0fe34e16bac6" exitCode=0 Feb 26 13:38:04 crc kubenswrapper[4724]: I0226 13:38:04.223549 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" event={"ID":"e8907499-9343-4e22-b6a9-6c2b936d3a61","Type":"ContainerDied","Data":"0634962f1435f0c915d7f7133a38d34885861c08253767b0689d0fe34e16bac6"} Feb 26 13:38:04 crc kubenswrapper[4724]: I0226 13:38:04.976519 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:38:04 crc kubenswrapper[4724]: E0226 13:38:04.977034 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:38:05 crc kubenswrapper[4724]: I0226 13:38:05.759742 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:05 crc kubenswrapper[4724]: I0226 13:38:05.896934 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f2rb\" (UniqueName: \"kubernetes.io/projected/e8907499-9343-4e22-b6a9-6c2b936d3a61-kube-api-access-5f2rb\") pod \"e8907499-9343-4e22-b6a9-6c2b936d3a61\" (UID: \"e8907499-9343-4e22-b6a9-6c2b936d3a61\") " Feb 26 13:38:05 crc kubenswrapper[4724]: I0226 13:38:05.908903 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8907499-9343-4e22-b6a9-6c2b936d3a61-kube-api-access-5f2rb" (OuterVolumeSpecName: "kube-api-access-5f2rb") pod "e8907499-9343-4e22-b6a9-6c2b936d3a61" (UID: "e8907499-9343-4e22-b6a9-6c2b936d3a61"). InnerVolumeSpecName "kube-api-access-5f2rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.005788 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f2rb\" (UniqueName: \"kubernetes.io/projected/e8907499-9343-4e22-b6a9-6c2b936d3a61-kube-api-access-5f2rb\") on node \"crc\" DevicePath \"\"" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.197365 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wblsj"] Feb 26 13:38:06 crc kubenswrapper[4724]: E0226 13:38:06.198538 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8907499-9343-4e22-b6a9-6c2b936d3a61" containerName="oc" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.198623 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8907499-9343-4e22-b6a9-6c2b936d3a61" containerName="oc" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.198919 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8907499-9343-4e22-b6a9-6c2b936d3a61" containerName="oc" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.201683 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.271233 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wblsj"] Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.303859 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" event={"ID":"e8907499-9343-4e22-b6a9-6c2b936d3a61","Type":"ContainerDied","Data":"98ea504f34d5799e03dfb0e62cd3d53cdaaabdc3180df2bb949e87be54716a70"} Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.303967 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ea504f34d5799e03dfb0e62cd3d53cdaaabdc3180df2bb949e87be54716a70" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.304076 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535218-7ssnh" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.315773 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-utilities\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.316038 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn5qg\" (UniqueName: \"kubernetes.io/projected/17487af6-b06e-431b-b449-d22200ca12a3-kube-api-access-hn5qg\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.316103 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-catalog-content\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.359353 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535212-j2qjq"] Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.379522 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535212-j2qjq"] Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.422574 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn5qg\" (UniqueName: \"kubernetes.io/projected/17487af6-b06e-431b-b449-d22200ca12a3-kube-api-access-hn5qg\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.422663 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-catalog-content\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.424780 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-catalog-content\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.424976 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-utilities\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.425456 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-utilities\") pod \"redhat-marketplace-wblsj\" (UID: 
\"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.458396 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn5qg\" (UniqueName: \"kubernetes.io/projected/17487af6-b06e-431b-b449-d22200ca12a3-kube-api-access-hn5qg\") pod \"redhat-marketplace-wblsj\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:06 crc kubenswrapper[4724]: I0226 13:38:06.552060 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:07 crc kubenswrapper[4724]: I0226 13:38:07.339930 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wblsj"] Feb 26 13:38:07 crc kubenswrapper[4724]: E0226 13:38:07.839092 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17487af6_b06e_431b_b449_d22200ca12a3.slice/crio-d3baa66ab11ab16d0bc6d1c6ff038b5c6985c5def612422b87db79bb82be929d.scope\": RecentStats: unable to find data in memory cache]" Feb 26 13:38:07 crc kubenswrapper[4724]: I0226 13:38:07.991045 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b28bed1-10a5-4eb2-83ee-95cac6bccef9" path="/var/lib/kubelet/pods/3b28bed1-10a5-4eb2-83ee-95cac6bccef9/volumes" Feb 26 13:38:08 crc kubenswrapper[4724]: I0226 13:38:08.330330 4724 generic.go:334] "Generic (PLEG): container finished" podID="17487af6-b06e-431b-b449-d22200ca12a3" containerID="d3baa66ab11ab16d0bc6d1c6ff038b5c6985c5def612422b87db79bb82be929d" exitCode=0 Feb 26 13:38:08 crc kubenswrapper[4724]: I0226 13:38:08.330839 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerDied","Data":"d3baa66ab11ab16d0bc6d1c6ff038b5c6985c5def612422b87db79bb82be929d"} Feb 26 13:38:08 crc kubenswrapper[4724]: I0226 13:38:08.330978 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerStarted","Data":"98a86a36326bd4bffe221ba8a5eb3f085abf56b44dce2af0742ecce597dbbaba"} Feb 26 13:38:10 crc kubenswrapper[4724]: I0226 13:38:10.357168 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerStarted","Data":"4e86614882b1eca6a71c96ac29acf040c41236646a4f4d44504f8ad695a5b61e"} Feb 26 13:38:11 crc kubenswrapper[4724]: I0226 13:38:11.368053 4724 generic.go:334] "Generic (PLEG): container finished" podID="17487af6-b06e-431b-b449-d22200ca12a3" containerID="4e86614882b1eca6a71c96ac29acf040c41236646a4f4d44504f8ad695a5b61e" exitCode=0 Feb 26 13:38:11 crc kubenswrapper[4724]: I0226 13:38:11.368572 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerDied","Data":"4e86614882b1eca6a71c96ac29acf040c41236646a4f4d44504f8ad695a5b61e"} Feb 26 13:38:12 crc kubenswrapper[4724]: I0226 13:38:12.384025 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" 
event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerStarted","Data":"f170c5f0aa1350cae2c8ddc48f0e130bd2953eeb3a48da7a412e7fc66f6dc4ce"} Feb 26 13:38:12 crc kubenswrapper[4724]: I0226 13:38:12.418994 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wblsj" podStartSLOduration=2.728653164 podStartE2EDuration="6.418962461s" podCreationTimestamp="2026-02-26 13:38:06 +0000 UTC" firstStartedPulling="2026-02-26 13:38:08.333487158 +0000 UTC m=+9154.989226273" lastFinishedPulling="2026-02-26 13:38:12.023796455 +0000 UTC m=+9158.679535570" observedRunningTime="2026-02-26 13:38:12.413348869 +0000 UTC m=+9159.069088004" watchObservedRunningTime="2026-02-26 13:38:12.418962461 +0000 UTC m=+9159.074701576" Feb 26 13:38:16 crc kubenswrapper[4724]: I0226 13:38:16.552605 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:16 crc kubenswrapper[4724]: I0226 13:38:16.555457 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:17 crc kubenswrapper[4724]: I0226 13:38:17.604073 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wblsj" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="registry-server" probeResult="failure" output=< Feb 26 13:38:17 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:38:17 crc kubenswrapper[4724]: > Feb 26 13:38:18 crc kubenswrapper[4724]: I0226 13:38:18.976407 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:38:18 crc kubenswrapper[4724]: E0226 13:38:18.980619 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:38:27 crc kubenswrapper[4724]: I0226 13:38:27.604039 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wblsj" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="registry-server" probeResult="failure" output=< Feb 26 13:38:27 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:38:27 crc kubenswrapper[4724]: > Feb 26 13:38:28 crc kubenswrapper[4724]: I0226 13:38:28.261404 4724 scope.go:117] "RemoveContainer" containerID="9c4d61c3c81678563d9dc9fc90dc6dfbfc9841aad3575eea10866a3075a9bae6" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.191792 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cvlmb"] Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.194152 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.215784 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvlmb"] Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.239052 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbpf2\" (UniqueName: \"kubernetes.io/projected/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-kube-api-access-rbpf2\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.239208 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-catalog-content\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.239337 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-utilities\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.341851 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbpf2\" (UniqueName: \"kubernetes.io/projected/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-kube-api-access-rbpf2\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.341925 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-catalog-content\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.341987 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-utilities\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.342651 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-catalog-content\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.342808 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-utilities\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.371819 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rbpf2\" (UniqueName: \"kubernetes.io/projected/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-kube-api-access-rbpf2\") pod \"certified-operators-cvlmb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:30 crc kubenswrapper[4724]: I0226 13:38:30.517799 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:31 crc kubenswrapper[4724]: I0226 13:38:31.054215 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvlmb"] Feb 26 13:38:31 crc kubenswrapper[4724]: W0226 13:38:31.061835 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e4e3fc_8b1c_4cf7_80e7_b14c134dccbb.slice/crio-e56b0397e2c5e7493d6d13ef82277b2b9ef6ffc0a8921ef0be07199a35fde94a WatchSource:0}: Error finding container e56b0397e2c5e7493d6d13ef82277b2b9ef6ffc0a8921ef0be07199a35fde94a: Status 404 returned error can't find the container with id e56b0397e2c5e7493d6d13ef82277b2b9ef6ffc0a8921ef0be07199a35fde94a Feb 26 13:38:31 crc kubenswrapper[4724]: I0226 13:38:31.568219 4724 generic.go:334] "Generic (PLEG): container finished" podID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerID="028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415" exitCode=0 Feb 26 13:38:31 crc kubenswrapper[4724]: I0226 13:38:31.568321 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerDied","Data":"028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415"} Feb 26 13:38:31 crc kubenswrapper[4724]: I0226 13:38:31.569568 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerStarted","Data":"e56b0397e2c5e7493d6d13ef82277b2b9ef6ffc0a8921ef0be07199a35fde94a"} Feb 26 13:38:31 crc kubenswrapper[4724]: I0226 13:38:31.976653 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:38:31 crc kubenswrapper[4724]: E0226 13:38:31.976969 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:38:32 crc kubenswrapper[4724]: I0226 13:38:32.582431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerStarted","Data":"eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad"} Feb 26 13:38:35 crc kubenswrapper[4724]: I0226 13:38:35.611635 4724 generic.go:334] "Generic (PLEG): container finished" podID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerID="eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad" exitCode=0 Feb 26 13:38:35 crc kubenswrapper[4724]: I0226 13:38:35.611720 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" 
event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerDied","Data":"eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad"} Feb 26 13:38:36 crc kubenswrapper[4724]: I0226 13:38:36.605230 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:36 crc kubenswrapper[4724]: I0226 13:38:36.625970 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerStarted","Data":"3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6"} Feb 26 13:38:36 crc kubenswrapper[4724]: I0226 13:38:36.663033 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cvlmb" podStartSLOduration=2.190963245 podStartE2EDuration="6.663010353s" podCreationTimestamp="2026-02-26 13:38:30 +0000 UTC" firstStartedPulling="2026-02-26 13:38:31.570254117 +0000 UTC m=+9178.225993232" lastFinishedPulling="2026-02-26 13:38:36.042301225 +0000 UTC m=+9182.698040340" observedRunningTime="2026-02-26 13:38:36.65299445 +0000 UTC m=+9183.308733565" watchObservedRunningTime="2026-02-26 13:38:36.663010353 +0000 UTC m=+9183.318749468" Feb 26 13:38:36 crc kubenswrapper[4724]: I0226 13:38:36.675272 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:38 crc kubenswrapper[4724]: I0226 13:38:38.365073 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wblsj"] Feb 26 13:38:38 crc kubenswrapper[4724]: I0226 13:38:38.365381 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wblsj" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="registry-server" containerID="cri-o://f170c5f0aa1350cae2c8ddc48f0e130bd2953eeb3a48da7a412e7fc66f6dc4ce" gracePeriod=2 Feb 26 13:38:38 crc kubenswrapper[4724]: I0226 13:38:38.666540 4724 generic.go:334] "Generic (PLEG): container finished" podID="17487af6-b06e-431b-b449-d22200ca12a3" containerID="f170c5f0aa1350cae2c8ddc48f0e130bd2953eeb3a48da7a412e7fc66f6dc4ce" exitCode=0 Feb 26 13:38:38 crc kubenswrapper[4724]: I0226 13:38:38.667249 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerDied","Data":"f170c5f0aa1350cae2c8ddc48f0e130bd2953eeb3a48da7a412e7fc66f6dc4ce"} Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.289108 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.426862 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-catalog-content\") pod \"17487af6-b06e-431b-b449-d22200ca12a3\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.427024 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-utilities\") pod \"17487af6-b06e-431b-b449-d22200ca12a3\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.427104 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn5qg\" (UniqueName: \"kubernetes.io/projected/17487af6-b06e-431b-b449-d22200ca12a3-kube-api-access-hn5qg\") pod \"17487af6-b06e-431b-b449-d22200ca12a3\" (UID: \"17487af6-b06e-431b-b449-d22200ca12a3\") " Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.427497 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-utilities" (OuterVolumeSpecName: "utilities") pod "17487af6-b06e-431b-b449-d22200ca12a3" (UID: "17487af6-b06e-431b-b449-d22200ca12a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.428090 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.436315 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17487af6-b06e-431b-b449-d22200ca12a3-kube-api-access-hn5qg" (OuterVolumeSpecName: "kube-api-access-hn5qg") pod "17487af6-b06e-431b-b449-d22200ca12a3" (UID: "17487af6-b06e-431b-b449-d22200ca12a3"). InnerVolumeSpecName "kube-api-access-hn5qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.458976 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17487af6-b06e-431b-b449-d22200ca12a3" (UID: "17487af6-b06e-431b-b449-d22200ca12a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.530277 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17487af6-b06e-431b-b449-d22200ca12a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.530315 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn5qg\" (UniqueName: \"kubernetes.io/projected/17487af6-b06e-431b-b449-d22200ca12a3-kube-api-access-hn5qg\") on node \"crc\" DevicePath \"\"" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.684833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wblsj" event={"ID":"17487af6-b06e-431b-b449-d22200ca12a3","Type":"ContainerDied","Data":"98a86a36326bd4bffe221ba8a5eb3f085abf56b44dce2af0742ecce597dbbaba"} Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.684935 4724 scope.go:117] "RemoveContainer" containerID="f170c5f0aa1350cae2c8ddc48f0e130bd2953eeb3a48da7a412e7fc66f6dc4ce" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.684948 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wblsj" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.718371 4724 scope.go:117] "RemoveContainer" containerID="4e86614882b1eca6a71c96ac29acf040c41236646a4f4d44504f8ad695a5b61e" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.742555 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wblsj"] Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.749225 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wblsj"] Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.780548 4724 scope.go:117] "RemoveContainer" containerID="d3baa66ab11ab16d0bc6d1c6ff038b5c6985c5def612422b87db79bb82be929d" Feb 26 13:38:39 crc kubenswrapper[4724]: I0226 13:38:39.988317 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17487af6-b06e-431b-b449-d22200ca12a3" path="/var/lib/kubelet/pods/17487af6-b06e-431b-b449-d22200ca12a3/volumes" Feb 26 13:38:40 crc kubenswrapper[4724]: I0226 13:38:40.518515 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:40 crc kubenswrapper[4724]: I0226 13:38:40.518597 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:38:41 crc kubenswrapper[4724]: I0226 13:38:41.573857 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cvlmb" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="registry-server" probeResult="failure" output=< Feb 26 13:38:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:38:41 crc kubenswrapper[4724]: > Feb 26 13:38:43 crc kubenswrapper[4724]: I0226 13:38:43.983883 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:38:43 crc kubenswrapper[4724]: E0226 13:38:43.985093 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:38:51 crc kubenswrapper[4724]: I0226 13:38:51.816998 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cvlmb" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="registry-server" probeResult="failure" output=< Feb 26 13:38:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:38:51 crc kubenswrapper[4724]: > Feb 26 13:38:55 crc kubenswrapper[4724]: I0226 13:38:55.099670 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:38:55 crc kubenswrapper[4724]: E0226 13:38:55.100652 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:39:00 crc kubenswrapper[4724]: I0226 13:39:00.579359 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:39:00 crc kubenswrapper[4724]: I0226 13:39:00.642762 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:39:01 crc kubenswrapper[4724]: I0226 13:39:01.392050 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvlmb"] Feb 26 13:39:01 crc kubenswrapper[4724]: I0226 13:39:01.965676 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cvlmb" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="registry-server" containerID="cri-o://3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6" gracePeriod=2 Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.434838 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.494503 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-catalog-content\") pod \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.494600 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbpf2\" (UniqueName: \"kubernetes.io/projected/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-kube-api-access-rbpf2\") pod \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.494628 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-utilities\") pod \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\" (UID: \"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb\") " Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.495464 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-utilities" (OuterVolumeSpecName: "utilities") pod "79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" (UID: "79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.502502 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-kube-api-access-rbpf2" (OuterVolumeSpecName: "kube-api-access-rbpf2") pod "79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" (UID: "79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb"). InnerVolumeSpecName "kube-api-access-rbpf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.550038 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" (UID: "79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.596416 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.596452 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbpf2\" (UniqueName: \"kubernetes.io/projected/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-kube-api-access-rbpf2\") on node \"crc\" DevicePath \"\"" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.596462 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.981497 4724 generic.go:334] "Generic (PLEG): container finished" podID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerID="3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6" exitCode=0 Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.981608 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvlmb" Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.981596 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerDied","Data":"3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6"} Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.981780 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvlmb" event={"ID":"79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb","Type":"ContainerDied","Data":"e56b0397e2c5e7493d6d13ef82277b2b9ef6ffc0a8921ef0be07199a35fde94a"} Feb 26 13:39:02 crc kubenswrapper[4724]: I0226 13:39:02.981828 4724 scope.go:117] "RemoveContainer" containerID="3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.020020 4724 scope.go:117] "RemoveContainer" containerID="eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.028557 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvlmb"] Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.036874 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cvlmb"] Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.043170 4724 scope.go:117] "RemoveContainer" containerID="028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.096046 4724 scope.go:117] "RemoveContainer" containerID="3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6" Feb 26 13:39:03 crc kubenswrapper[4724]: E0226 13:39:03.097099 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6\": container with ID starting with 3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6 not found: ID does not exist" containerID="3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.097176 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6"} err="failed to get container status \"3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6\": rpc error: code = NotFound desc = could not find container \"3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6\": container with ID starting with 3174547df141cbc4da0272c76b10dab3ad96ac8c7a04b433ef199643749024e6 not found: ID does not exist" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.097230 4724 scope.go:117] "RemoveContainer" containerID="eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad" Feb 26 13:39:03 crc kubenswrapper[4724]: E0226 13:39:03.097786 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad\": container with ID starting with eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad not found: ID does not exist" containerID="eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.097831 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad"} err="failed to get container status \"eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad\": rpc error: code = NotFound desc = could not find container \"eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad\": container with ID starting with eaedd3a8c9d199547785b5b4249cfd02a0cac5306612bc42ac9ec6994e8302ad not found: ID does not exist" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.097858 4724 scope.go:117] "RemoveContainer" containerID="028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415" Feb 26 13:39:03 crc kubenswrapper[4724]: E0226 13:39:03.098292 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415\": container with ID starting with 028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415 not found: ID does not exist" containerID="028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.098341 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415"} err="failed to get container status \"028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415\": rpc error: code = NotFound desc = could not find container \"028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415\": container with ID starting with 028b113461693da2ee1c59e48fcd306da882e515ead61faf26dc41aa93d6d415 not found: ID does not exist" Feb 26 13:39:03 crc kubenswrapper[4724]: I0226 13:39:03.987382 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" path="/var/lib/kubelet/pods/79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb/volumes" Feb 26 13:39:05 crc kubenswrapper[4724]: I0226 13:39:05.975753 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:39:05 crc kubenswrapper[4724]: E0226 13:39:05.976703 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:39:17 crc kubenswrapper[4724]: I0226 13:39:17.976455 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:39:17 crc kubenswrapper[4724]: E0226 13:39:17.977768 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:39:32 crc kubenswrapper[4724]: I0226 13:39:32.976880 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:39:32 crc kubenswrapper[4724]: E0226 13:39:32.978465 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:39:44 crc kubenswrapper[4724]: I0226 13:39:44.975895 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:39:44 crc kubenswrapper[4724]: E0226 13:39:44.976735 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:39:55 crc kubenswrapper[4724]: I0226 13:39:55.975678 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:39:55 crc kubenswrapper[4724]: E0226 13:39:55.976643 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.146029 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535220-tb697"] Feb 26 13:40:00 crc kubenswrapper[4724]: E0226 13:40:00.149588 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="extract-content" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.149697 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="extract-content" Feb 26 13:40:00 crc kubenswrapper[4724]: E0226 13:40:00.150520 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="extract-utilities" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.150620 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="extract-utilities" Feb 26 13:40:00 crc kubenswrapper[4724]: E0226 13:40:00.150686 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="extract-utilities" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.150744 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="extract-utilities" Feb 26 13:40:00 crc kubenswrapper[4724]: E0226 13:40:00.150836 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="extract-content" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.150904 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="extract-content" Feb 26 13:40:00 crc kubenswrapper[4724]: E0226 13:40:00.150964 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="registry-server" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.151022 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="registry-server" Feb 26 13:40:00 crc kubenswrapper[4724]: E0226 13:40:00.151786 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="registry-server" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.151863 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="registry-server" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.152245 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e4e3fc-8b1c-4cf7-80e7-b14c134dccbb" containerName="registry-server" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.153066 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="17487af6-b06e-431b-b449-d22200ca12a3" containerName="registry-server" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.154032 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.157310 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.157443 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.159000 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.193283 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535220-tb697"] Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.280569 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57bx\" (UniqueName: \"kubernetes.io/projected/1e47d1c3-335a-45fe-b310-7f758e7fc85c-kube-api-access-t57bx\") pod \"auto-csr-approver-29535220-tb697\" (UID: \"1e47d1c3-335a-45fe-b310-7f758e7fc85c\") " pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.382499 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t57bx\" (UniqueName: \"kubernetes.io/projected/1e47d1c3-335a-45fe-b310-7f758e7fc85c-kube-api-access-t57bx\") pod \"auto-csr-approver-29535220-tb697\" (UID: \"1e47d1c3-335a-45fe-b310-7f758e7fc85c\") " pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.406515 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t57bx\" (UniqueName: \"kubernetes.io/projected/1e47d1c3-335a-45fe-b310-7f758e7fc85c-kube-api-access-t57bx\") pod \"auto-csr-approver-29535220-tb697\" (UID: \"1e47d1c3-335a-45fe-b310-7f758e7fc85c\") " pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:00 crc kubenswrapper[4724]: I0226 13:40:00.479429 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:01 crc kubenswrapper[4724]: I0226 13:40:01.267621 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535220-tb697"] Feb 26 13:40:01 crc kubenswrapper[4724]: I0226 13:40:01.552272 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535220-tb697" event={"ID":"1e47d1c3-335a-45fe-b310-7f758e7fc85c","Type":"ContainerStarted","Data":"48bbfedc930d6374b6ab9e09554072b46a7010d1bd5b08fd56c4342db426845f"} Feb 26 13:40:03 crc kubenswrapper[4724]: I0226 13:40:03.581929 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535220-tb697" event={"ID":"1e47d1c3-335a-45fe-b310-7f758e7fc85c","Type":"ContainerStarted","Data":"c5e9b7dd8b55ec6c2b27a35b323e2fb94842a125bd41d0440de2665df49d948f"} Feb 26 13:40:03 crc kubenswrapper[4724]: I0226 13:40:03.607641 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535220-tb697" podStartSLOduration=2.516760612 podStartE2EDuration="3.607585218s" podCreationTimestamp="2026-02-26 13:40:00 +0000 UTC" firstStartedPulling="2026-02-26 13:40:01.276167805 +0000 UTC m=+9267.931906920" lastFinishedPulling="2026-02-26 13:40:02.366992401 +0000 UTC m=+9269.022731526" observedRunningTime="2026-02-26 13:40:03.595587535 +0000 UTC m=+9270.251326650" watchObservedRunningTime="2026-02-26 13:40:03.607585218 +0000 UTC m=+9270.263324353" Feb 26 13:40:04 crc kubenswrapper[4724]: I0226 13:40:04.592223 4724 generic.go:334] "Generic (PLEG): container finished" podID="1e47d1c3-335a-45fe-b310-7f758e7fc85c" containerID="c5e9b7dd8b55ec6c2b27a35b323e2fb94842a125bd41d0440de2665df49d948f" exitCode=0 Feb 26 13:40:04 crc kubenswrapper[4724]: I0226 13:40:04.592264 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535220-tb697" event={"ID":"1e47d1c3-335a-45fe-b310-7f758e7fc85c","Type":"ContainerDied","Data":"c5e9b7dd8b55ec6c2b27a35b323e2fb94842a125bd41d0440de2665df49d948f"} Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.174775 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.275229 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t57bx\" (UniqueName: \"kubernetes.io/projected/1e47d1c3-335a-45fe-b310-7f758e7fc85c-kube-api-access-t57bx\") pod \"1e47d1c3-335a-45fe-b310-7f758e7fc85c\" (UID: \"1e47d1c3-335a-45fe-b310-7f758e7fc85c\") " Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.289636 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e47d1c3-335a-45fe-b310-7f758e7fc85c-kube-api-access-t57bx" (OuterVolumeSpecName: "kube-api-access-t57bx") pod "1e47d1c3-335a-45fe-b310-7f758e7fc85c" (UID: "1e47d1c3-335a-45fe-b310-7f758e7fc85c"). InnerVolumeSpecName "kube-api-access-t57bx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.377732 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t57bx\" (UniqueName: \"kubernetes.io/projected/1e47d1c3-335a-45fe-b310-7f758e7fc85c-kube-api-access-t57bx\") on node \"crc\" DevicePath \"\"" Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.615426 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535220-tb697" event={"ID":"1e47d1c3-335a-45fe-b310-7f758e7fc85c","Type":"ContainerDied","Data":"48bbfedc930d6374b6ab9e09554072b46a7010d1bd5b08fd56c4342db426845f"} Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.615480 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48bbfedc930d6374b6ab9e09554072b46a7010d1bd5b08fd56c4342db426845f" Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.615550 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535220-tb697" Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.683245 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535214-28kr8"] Feb 26 13:40:06 crc kubenswrapper[4724]: I0226 13:40:06.692892 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535214-28kr8"] Feb 26 13:40:07 crc kubenswrapper[4724]: I0226 13:40:07.986006 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e875781-84c5-41e3-9b07-a6956f211aa6" path="/var/lib/kubelet/pods/1e875781-84c5-41e3-9b07-a6956f211aa6/volumes" Feb 26 13:40:09 crc kubenswrapper[4724]: I0226 13:40:09.977053 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:40:09 crc kubenswrapper[4724]: E0226 13:40:09.977437 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:40:25 crc kubenswrapper[4724]: I0226 13:40:25.134815 4724 scope.go:117] "RemoveContainer" containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:40:26 crc kubenswrapper[4724]: I0226 13:40:26.847045 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"097f7d307b384e7c5bfa9a6744e84c3c14eef3ca5d1ac323f91926e85d12bb8f"} Feb 26 13:40:28 crc kubenswrapper[4724]: I0226 13:40:28.389158 4724 scope.go:117] "RemoveContainer" containerID="82b96c00a98ee29621dfb70fdee5c331bb64b361d96c16bfda0deea5dbfeb311" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.157588 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535222-fhwv4"] Feb 26 13:42:00 crc kubenswrapper[4724]: E0226 13:42:00.158750 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e47d1c3-335a-45fe-b310-7f758e7fc85c" containerName="oc" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.158766 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e47d1c3-335a-45fe-b310-7f758e7fc85c" containerName="oc" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.158966 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e47d1c3-335a-45fe-b310-7f758e7fc85c" containerName="oc" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.159831 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.163707 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.163918 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.164156 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.176749 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535222-fhwv4"] Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.278127 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjx2\" (UniqueName: \"kubernetes.io/projected/dea751cb-6818-4eaf-863d-4c26500a5dd3-kube-api-access-mpjx2\") pod \"auto-csr-approver-29535222-fhwv4\" (UID: \"dea751cb-6818-4eaf-863d-4c26500a5dd3\") " pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.380812 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjx2\" (UniqueName: \"kubernetes.io/projected/dea751cb-6818-4eaf-863d-4c26500a5dd3-kube-api-access-mpjx2\") pod \"auto-csr-approver-29535222-fhwv4\" (UID: \"dea751cb-6818-4eaf-863d-4c26500a5dd3\") " pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.408441 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjx2\" (UniqueName: \"kubernetes.io/projected/dea751cb-6818-4eaf-863d-4c26500a5dd3-kube-api-access-mpjx2\") pod \"auto-csr-approver-29535222-fhwv4\" (UID: \"dea751cb-6818-4eaf-863d-4c26500a5dd3\") " pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:00 crc kubenswrapper[4724]: I0226 13:42:00.500516 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:01 crc kubenswrapper[4724]: I0226 13:42:01.016007 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535222-fhwv4"] Feb 26 13:42:01 crc kubenswrapper[4724]: I0226 13:42:01.430767 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" event={"ID":"dea751cb-6818-4eaf-863d-4c26500a5dd3","Type":"ContainerStarted","Data":"49ed984d31c4c26ff40160f3fbdcb350dbdd432238d7b4ec6cfb15f057b3d680"} Feb 26 13:42:02 crc kubenswrapper[4724]: I0226 13:42:02.444667 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" event={"ID":"dea751cb-6818-4eaf-863d-4c26500a5dd3","Type":"ContainerStarted","Data":"0f8f0f5cc2c7f6b7eaeb518923a9e5cb4e3f9bcfec37ac516c6c8c8fc3c86185"} Feb 26 13:42:02 crc kubenswrapper[4724]: I0226 13:42:02.464826 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" podStartSLOduration=1.433264716 podStartE2EDuration="2.464805996s" podCreationTimestamp="2026-02-26 13:42:00 +0000 UTC" firstStartedPulling="2026-02-26 13:42:01.015682394 +0000 UTC m=+9387.671421519" lastFinishedPulling="2026-02-26 13:42:02.047223674 +0000 UTC m=+9388.702962799" observedRunningTime="2026-02-26 13:42:02.463674897 +0000 UTC m=+9389.119414012" watchObservedRunningTime="2026-02-26 13:42:02.464805996 +0000 UTC m=+9389.120545121" Feb 26 13:42:03 crc kubenswrapper[4724]: I0226 13:42:03.457972 4724 generic.go:334] "Generic (PLEG): container finished" podID="dea751cb-6818-4eaf-863d-4c26500a5dd3" containerID="0f8f0f5cc2c7f6b7eaeb518923a9e5cb4e3f9bcfec37ac516c6c8c8fc3c86185" exitCode=0 Feb 26 13:42:03 crc kubenswrapper[4724]: I0226 13:42:03.458103 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" event={"ID":"dea751cb-6818-4eaf-863d-4c26500a5dd3","Type":"ContainerDied","Data":"0f8f0f5cc2c7f6b7eaeb518923a9e5cb4e3f9bcfec37ac516c6c8c8fc3c86185"} Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.021046 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.098643 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpjx2\" (UniqueName: \"kubernetes.io/projected/dea751cb-6818-4eaf-863d-4c26500a5dd3-kube-api-access-mpjx2\") pod \"dea751cb-6818-4eaf-863d-4c26500a5dd3\" (UID: \"dea751cb-6818-4eaf-863d-4c26500a5dd3\") " Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.109092 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea751cb-6818-4eaf-863d-4c26500a5dd3-kube-api-access-mpjx2" (OuterVolumeSpecName: "kube-api-access-mpjx2") pod "dea751cb-6818-4eaf-863d-4c26500a5dd3" (UID: "dea751cb-6818-4eaf-863d-4c26500a5dd3"). InnerVolumeSpecName "kube-api-access-mpjx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.202542 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpjx2\" (UniqueName: \"kubernetes.io/projected/dea751cb-6818-4eaf-863d-4c26500a5dd3-kube-api-access-mpjx2\") on node \"crc\" DevicePath \"\"" Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.483014 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" event={"ID":"dea751cb-6818-4eaf-863d-4c26500a5dd3","Type":"ContainerDied","Data":"49ed984d31c4c26ff40160f3fbdcb350dbdd432238d7b4ec6cfb15f057b3d680"} Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.483622 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ed984d31c4c26ff40160f3fbdcb350dbdd432238d7b4ec6cfb15f057b3d680" Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.483097 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535222-fhwv4" Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.568071 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535216-s6shz"] Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.583433 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535216-s6shz"] Feb 26 13:42:05 crc kubenswrapper[4724]: I0226 13:42:05.991135 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba7667e-6b8a-4c40-8211-4ac22e1460ec" path="/var/lib/kubelet/pods/0ba7667e-6b8a-4c40-8211-4ac22e1460ec/volumes" Feb 26 13:42:28 crc kubenswrapper[4724]: I0226 13:42:28.495960 4724 scope.go:117] "RemoveContainer" containerID="e8d2d35b0f4d01235d2d25d0375568cb3f0b8ae28258223d76c61a5aa6744b49" Feb 26 13:42:29 crc kubenswrapper[4724]: I0226 13:42:29.953011 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2v49h"] Feb 26 13:42:29 crc kubenswrapper[4724]: E0226 13:42:29.953879 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea751cb-6818-4eaf-863d-4c26500a5dd3" containerName="oc" Feb 26 13:42:29 crc kubenswrapper[4724]: I0226 13:42:29.953897 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea751cb-6818-4eaf-863d-4c26500a5dd3" containerName="oc" Feb 26 13:42:29 crc kubenswrapper[4724]: I0226 13:42:29.954158 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea751cb-6818-4eaf-863d-4c26500a5dd3" containerName="oc" Feb 26 13:42:29 crc kubenswrapper[4724]: I0226 13:42:29.955665 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:29 crc kubenswrapper[4724]: I0226 13:42:29.987458 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2v49h"] Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.041215 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-catalog-content\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.041317 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-utilities\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.041428 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dl2f\" (UniqueName: \"kubernetes.io/projected/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-kube-api-access-2dl2f\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.147500 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-catalog-content\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.147626 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-utilities\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.147730 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dl2f\" (UniqueName: \"kubernetes.io/projected/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-kube-api-access-2dl2f\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.148530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-catalog-content\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.151602 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-utilities\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.180308 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2dl2f\" (UniqueName: \"kubernetes.io/projected/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-kube-api-access-2dl2f\") pod \"community-operators-2v49h\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.278414 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:30 crc kubenswrapper[4724]: I0226 13:42:30.779601 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2v49h"] Feb 26 13:42:31 crc kubenswrapper[4724]: I0226 13:42:31.728387 4724 generic.go:334] "Generic (PLEG): container finished" podID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerID="d46e38e33412c1c5fbc714537eed8c1bab65e7632a8f0bc8f5e50af9889e2109" exitCode=0 Feb 26 13:42:31 crc kubenswrapper[4724]: I0226 13:42:31.728477 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerDied","Data":"d46e38e33412c1c5fbc714537eed8c1bab65e7632a8f0bc8f5e50af9889e2109"} Feb 26 13:42:31 crc kubenswrapper[4724]: I0226 13:42:31.731478 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerStarted","Data":"84ef106bd833a9e3a50c826ebf3aaa19b8e483a79292aa00410ef5db12003711"} Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.136416 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m8m7q"] Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.139148 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.165239 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8m7q"] Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.285694 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-catalog-content\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.286292 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-utilities\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.286444 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8d9m\" (UniqueName: \"kubernetes.io/projected/60e9727c-001d-4195-8a79-f16805ff18e4-kube-api-access-l8d9m\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.388438 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-utilities\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.388996 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-utilities\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.389104 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8d9m\" (UniqueName: \"kubernetes.io/projected/60e9727c-001d-4195-8a79-f16805ff18e4-kube-api-access-l8d9m\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.389660 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-catalog-content\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.390082 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-catalog-content\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.437467 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l8d9m\" (UniqueName: \"kubernetes.io/projected/60e9727c-001d-4195-8a79-f16805ff18e4-kube-api-access-l8d9m\") pod \"redhat-operators-m8m7q\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.465060 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:32 crc kubenswrapper[4724]: I0226 13:42:32.745487 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerStarted","Data":"7982f4d6f0d64378a6ed9fd85992ab15ca389603d77176378b9982fadd726f1c"} Feb 26 13:42:33 crc kubenswrapper[4724]: I0226 13:42:33.072514 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8m7q"] Feb 26 13:42:33 crc kubenswrapper[4724]: I0226 13:42:33.766065 4724 generic.go:334] "Generic (PLEG): container finished" podID="60e9727c-001d-4195-8a79-f16805ff18e4" containerID="9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204" exitCode=0 Feb 26 13:42:33 crc kubenswrapper[4724]: I0226 13:42:33.766277 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerDied","Data":"9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204"} Feb 26 13:42:33 crc kubenswrapper[4724]: I0226 13:42:33.767167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerStarted","Data":"e8eab7e015d44649c54d174beec133fe0b4b5e7031c05873d5cb2fdc80663a8b"} Feb 26 13:42:35 crc kubenswrapper[4724]: I0226 13:42:35.826273 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerStarted","Data":"e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c"} Feb 26 13:42:35 crc kubenswrapper[4724]: I0226 13:42:35.828527 4724 generic.go:334] "Generic (PLEG): container finished" podID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerID="7982f4d6f0d64378a6ed9fd85992ab15ca389603d77176378b9982fadd726f1c" exitCode=0 Feb 26 13:42:35 crc kubenswrapper[4724]: I0226 13:42:35.828553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerDied","Data":"7982f4d6f0d64378a6ed9fd85992ab15ca389603d77176378b9982fadd726f1c"} Feb 26 13:42:36 crc kubenswrapper[4724]: I0226 13:42:36.846616 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerStarted","Data":"6dd86991101589598c423151f2f49f7e5f14b17a3af97d145b0975d26d0e911d"} Feb 26 13:42:40 crc kubenswrapper[4724]: I0226 13:42:40.278640 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:40 crc kubenswrapper[4724]: I0226 13:42:40.279064 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:42:41 crc kubenswrapper[4724]: I0226 13:42:41.369573 4724 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-2v49h" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" probeResult="failure" output=< Feb 26 13:42:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:42:41 crc kubenswrapper[4724]: > Feb 26 13:42:46 crc kubenswrapper[4724]: I0226 13:42:46.906658 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:42:46 crc kubenswrapper[4724]: I0226 13:42:46.907192 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:42:46 crc kubenswrapper[4724]: I0226 13:42:46.956346 4724 generic.go:334] "Generic (PLEG): container finished" podID="60e9727c-001d-4195-8a79-f16805ff18e4" containerID="e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c" exitCode=0 Feb 26 13:42:46 crc kubenswrapper[4724]: I0226 13:42:46.956380 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerDied","Data":"e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c"} Feb 26 13:42:46 crc kubenswrapper[4724]: I0226 13:42:46.989299 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2v49h" podStartSLOduration=13.253338617 podStartE2EDuration="17.989276278s" podCreationTimestamp="2026-02-26 13:42:29 +0000 UTC" firstStartedPulling="2026-02-26 13:42:31.732371341 +0000 UTC m=+9418.388110466" lastFinishedPulling="2026-02-26 13:42:36.468309012 +0000 UTC m=+9423.124048127" observedRunningTime="2026-02-26 13:42:36.874639529 +0000 UTC m=+9423.530378644" watchObservedRunningTime="2026-02-26 13:42:46.989276278 +0000 UTC m=+9433.645015393" Feb 26 13:42:48 crc kubenswrapper[4724]: I0226 13:42:48.977417 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerStarted","Data":"cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595"} Feb 26 13:42:49 crc kubenswrapper[4724]: I0226 13:42:49.004117 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m8m7q" podStartSLOduration=2.610012662 podStartE2EDuration="17.004083939s" podCreationTimestamp="2026-02-26 13:42:32 +0000 UTC" firstStartedPulling="2026-02-26 13:42:33.769098354 +0000 UTC m=+9420.424837469" lastFinishedPulling="2026-02-26 13:42:48.163169631 +0000 UTC m=+9434.818908746" observedRunningTime="2026-02-26 13:42:49.001638107 +0000 UTC m=+9435.657377262" watchObservedRunningTime="2026-02-26 13:42:49.004083939 +0000 UTC m=+9435.659823054" Feb 26 13:42:51 crc kubenswrapper[4724]: I0226 13:42:51.351071 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2v49h" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" probeResult="failure" output=< Feb 26 13:42:51 crc kubenswrapper[4724]: 
timeout: failed to connect service ":50051" within 1s Feb 26 13:42:51 crc kubenswrapper[4724]: > Feb 26 13:42:52 crc kubenswrapper[4724]: I0226 13:42:52.466013 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:52 crc kubenswrapper[4724]: I0226 13:42:52.466348 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:42:53 crc kubenswrapper[4724]: I0226 13:42:53.518489 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" probeResult="failure" output=< Feb 26 13:42:53 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:42:53 crc kubenswrapper[4724]: > Feb 26 13:43:01 crc kubenswrapper[4724]: I0226 13:43:01.328615 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2v49h" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" probeResult="failure" output=< Feb 26 13:43:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:43:01 crc kubenswrapper[4724]: > Feb 26 13:43:03 crc kubenswrapper[4724]: I0226 13:43:03.518439 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" probeResult="failure" output=< Feb 26 13:43:03 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:43:03 crc kubenswrapper[4724]: > Feb 26 13:43:10 crc kubenswrapper[4724]: I0226 13:43:10.334396 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:43:10 crc kubenswrapper[4724]: I0226 13:43:10.397774 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:43:12 crc kubenswrapper[4724]: I0226 13:43:12.406477 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2v49h"] Feb 26 13:43:12 crc kubenswrapper[4724]: I0226 13:43:12.407004 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2v49h" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" containerID="cri-o://6dd86991101589598c423151f2f49f7e5f14b17a3af97d145b0975d26d0e911d" gracePeriod=2 Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.224295 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerDied","Data":"6dd86991101589598c423151f2f49f7e5f14b17a3af97d145b0975d26d0e911d"} Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.224378 4724 generic.go:334] "Generic (PLEG): container finished" podID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerID="6dd86991101589598c423151f2f49f7e5f14b17a3af97d145b0975d26d0e911d" exitCode=0 Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.411063 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.515749 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" probeResult="failure" output=< Feb 26 13:43:13 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:43:13 crc kubenswrapper[4724]: > Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.519724 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-utilities\") pod \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.519873 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-catalog-content\") pod \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.520147 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dl2f\" (UniqueName: \"kubernetes.io/projected/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-kube-api-access-2dl2f\") pod \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\" (UID: \"9cac4f06-b60b-4e95-8cb3-5d7b91986de6\") " Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.520221 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-utilities" (OuterVolumeSpecName: "utilities") pod "9cac4f06-b60b-4e95-8cb3-5d7b91986de6" (UID: "9cac4f06-b60b-4e95-8cb3-5d7b91986de6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.520847 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.542051 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-kube-api-access-2dl2f" (OuterVolumeSpecName: "kube-api-access-2dl2f") pod "9cac4f06-b60b-4e95-8cb3-5d7b91986de6" (UID: "9cac4f06-b60b-4e95-8cb3-5d7b91986de6"). InnerVolumeSpecName "kube-api-access-2dl2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.581569 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cac4f06-b60b-4e95-8cb3-5d7b91986de6" (UID: "9cac4f06-b60b-4e95-8cb3-5d7b91986de6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.622996 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:43:13 crc kubenswrapper[4724]: I0226 13:43:13.623038 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dl2f\" (UniqueName: \"kubernetes.io/projected/9cac4f06-b60b-4e95-8cb3-5d7b91986de6-kube-api-access-2dl2f\") on node \"crc\" DevicePath \"\"" Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.237779 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v49h" event={"ID":"9cac4f06-b60b-4e95-8cb3-5d7b91986de6","Type":"ContainerDied","Data":"84ef106bd833a9e3a50c826ebf3aaa19b8e483a79292aa00410ef5db12003711"} Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.237849 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2v49h" Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.238227 4724 scope.go:117] "RemoveContainer" containerID="6dd86991101589598c423151f2f49f7e5f14b17a3af97d145b0975d26d0e911d" Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.271770 4724 scope.go:117] "RemoveContainer" containerID="7982f4d6f0d64378a6ed9fd85992ab15ca389603d77176378b9982fadd726f1c" Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.281662 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2v49h"] Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.298823 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2v49h"] Feb 26 13:43:14 crc kubenswrapper[4724]: I0226 13:43:14.305114 4724 scope.go:117] "RemoveContainer" containerID="d46e38e33412c1c5fbc714537eed8c1bab65e7632a8f0bc8f5e50af9889e2109" Feb 26 13:43:15 crc kubenswrapper[4724]: I0226 13:43:15.989018 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" path="/var/lib/kubelet/pods/9cac4f06-b60b-4e95-8cb3-5d7b91986de6/volumes" Feb 26 13:43:16 crc kubenswrapper[4724]: I0226 13:43:16.905895 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:43:16 crc kubenswrapper[4724]: I0226 13:43:16.906328 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:43:23 crc kubenswrapper[4724]: I0226 13:43:23.572937 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" probeResult="failure" output=< Feb 26 13:43:23 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:43:23 crc kubenswrapper[4724]: > Feb 26 13:43:33 crc kubenswrapper[4724]: I0226 13:43:33.535741 4724 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" probeResult="failure" output=< Feb 26 13:43:33 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:43:33 crc kubenswrapper[4724]: > Feb 26 13:43:43 crc kubenswrapper[4724]: I0226 13:43:43.517977 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" probeResult="failure" output=< Feb 26 13:43:43 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:43:43 crc kubenswrapper[4724]: > Feb 26 13:43:46 crc kubenswrapper[4724]: I0226 13:43:46.905913 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:43:46 crc kubenswrapper[4724]: I0226 13:43:46.906263 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:43:46 crc kubenswrapper[4724]: I0226 13:43:46.906314 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:43:46 crc kubenswrapper[4724]: I0226 13:43:46.908024 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"097f7d307b384e7c5bfa9a6744e84c3c14eef3ca5d1ac323f91926e85d12bb8f"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:43:46 crc kubenswrapper[4724]: I0226 13:43:46.908109 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://097f7d307b384e7c5bfa9a6744e84c3c14eef3ca5d1ac323f91926e85d12bb8f" gracePeriod=600 Feb 26 13:43:47 crc kubenswrapper[4724]: I0226 13:43:47.580072 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="097f7d307b384e7c5bfa9a6744e84c3c14eef3ca5d1ac323f91926e85d12bb8f" exitCode=0 Feb 26 13:43:47 crc kubenswrapper[4724]: I0226 13:43:47.580169 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"097f7d307b384e7c5bfa9a6744e84c3c14eef3ca5d1ac323f91926e85d12bb8f"} Feb 26 13:43:47 crc kubenswrapper[4724]: I0226 13:43:47.580465 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b"} Feb 26 13:43:47 crc kubenswrapper[4724]: I0226 13:43:47.580502 4724 scope.go:117] "RemoveContainer" 
containerID="5c27f92973a09fe7e3f2062677317af178b516471f7aedefa9a395e9cd92b1f9" Feb 26 13:43:52 crc kubenswrapper[4724]: I0226 13:43:52.285795 4724 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rtjt6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 13:43:52 crc kubenswrapper[4724]: I0226 13:43:52.286319 4724 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rtjt6" podUID="2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 13:43:52 crc kubenswrapper[4724]: I0226 13:43:52.533450 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:43:52 crc kubenswrapper[4724]: I0226 13:43:52.592286 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:43:52 crc kubenswrapper[4724]: I0226 13:43:52.802444 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8m7q"] Feb 26 13:43:54 crc kubenswrapper[4724]: I0226 13:43:54.304331 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m8m7q" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" containerID="cri-o://cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595" gracePeriod=2 Feb 26 13:43:54 crc kubenswrapper[4724]: I0226 13:43:54.969658 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.017963 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-catalog-content\") pod \"60e9727c-001d-4195-8a79-f16805ff18e4\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.018016 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-utilities\") pod \"60e9727c-001d-4195-8a79-f16805ff18e4\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.018236 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8d9m\" (UniqueName: \"kubernetes.io/projected/60e9727c-001d-4195-8a79-f16805ff18e4-kube-api-access-l8d9m\") pod \"60e9727c-001d-4195-8a79-f16805ff18e4\" (UID: \"60e9727c-001d-4195-8a79-f16805ff18e4\") " Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.021526 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-utilities" (OuterVolumeSpecName: "utilities") pod "60e9727c-001d-4195-8a79-f16805ff18e4" (UID: "60e9727c-001d-4195-8a79-f16805ff18e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.036919 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e9727c-001d-4195-8a79-f16805ff18e4-kube-api-access-l8d9m" (OuterVolumeSpecName: "kube-api-access-l8d9m") pod "60e9727c-001d-4195-8a79-f16805ff18e4" (UID: "60e9727c-001d-4195-8a79-f16805ff18e4"). InnerVolumeSpecName "kube-api-access-l8d9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.122932 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8d9m\" (UniqueName: \"kubernetes.io/projected/60e9727c-001d-4195-8a79-f16805ff18e4-kube-api-access-l8d9m\") on node \"crc\" DevicePath \"\"" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.122971 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.186475 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60e9727c-001d-4195-8a79-f16805ff18e4" (UID: "60e9727c-001d-4195-8a79-f16805ff18e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.224449 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e9727c-001d-4195-8a79-f16805ff18e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.318971 4724 generic.go:334] "Generic (PLEG): container finished" podID="60e9727c-001d-4195-8a79-f16805ff18e4" containerID="cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595" exitCode=0 Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.319522 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerDied","Data":"cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595"} Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.319964 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8m7q" event={"ID":"60e9727c-001d-4195-8a79-f16805ff18e4","Type":"ContainerDied","Data":"e8eab7e015d44649c54d174beec133fe0b4b5e7031c05873d5cb2fdc80663a8b"} Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.320085 4724 scope.go:117] "RemoveContainer" containerID="cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.319589 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8m7q" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.342953 4724 scope.go:117] "RemoveContainer" containerID="e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.373442 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8m7q"] Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.377897 4724 scope.go:117] "RemoveContainer" containerID="9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.384237 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m8m7q"] Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.415391 4724 scope.go:117] "RemoveContainer" containerID="cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595" Feb 26 13:43:55 crc kubenswrapper[4724]: E0226 13:43:55.421337 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595\": container with ID starting with cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595 not found: ID does not exist" containerID="cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.421702 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595"} err="failed to get container status \"cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595\": rpc error: code = NotFound desc = could not find container \"cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595\": container with ID starting with cc91373ddfa5e291858ff03ebaaa5e963209ebbba258a468dc819a08ea68b595 not found: ID does not exist" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.421739 4724 scope.go:117] "RemoveContainer" containerID="e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c" Feb 26 13:43:55 crc kubenswrapper[4724]: E0226 13:43:55.422096 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c\": container with ID starting with e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c not found: ID does not exist" containerID="e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.422120 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c"} err="failed to get container status \"e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c\": rpc error: code = NotFound desc = could not find container \"e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c\": container with ID starting with e657a3833bada923d7706b3a331133b64135429a4e04a8afa63941a7641bd45c not found: ID does not exist" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.422164 4724 scope.go:117] "RemoveContainer" containerID="9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204" Feb 26 13:43:55 crc kubenswrapper[4724]: E0226 13:43:55.422933 4724 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204\": container with ID starting with 9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204 not found: ID does not exist" containerID="9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.423016 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204"} err="failed to get container status \"9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204\": rpc error: code = NotFound desc = could not find container \"9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204\": container with ID starting with 9e11bb76f5bf0f877102f17f0d8ead6342336d3b51921ea5d4e7e7fd880be204 not found: ID does not exist" Feb 26 13:43:55 crc kubenswrapper[4724]: I0226 13:43:55.984625 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" path="/var/lib/kubelet/pods/60e9727c-001d-4195-8a79-f16805ff18e4/volumes" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.249019 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535224-8lc69"] Feb 26 13:44:00 crc kubenswrapper[4724]: E0226 13:44:00.260180 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.260515 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" Feb 26 13:44:00 crc kubenswrapper[4724]: E0226 13:44:00.260614 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.260683 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" Feb 26 13:44:00 crc kubenswrapper[4724]: E0226 13:44:00.260763 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="extract-utilities" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.260851 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="extract-utilities" Feb 26 13:44:00 crc kubenswrapper[4724]: E0226 13:44:00.260922 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="extract-content" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.260991 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="extract-content" Feb 26 13:44:00 crc kubenswrapper[4724]: E0226 13:44:00.261067 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="extract-content" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.261133 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="extract-content" Feb 26 13:44:00 crc kubenswrapper[4724]: E0226 13:44:00.261326 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="extract-utilities" Feb 
26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.261435 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="extract-utilities" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.262758 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e9727c-001d-4195-8a79-f16805ff18e4" containerName="registry-server" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.262945 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cac4f06-b60b-4e95-8cb3-5d7b91986de6" containerName="registry-server" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.268662 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.276789 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535224-8lc69"] Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.289035 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.289033 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.289044 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.327065 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk842\" (UniqueName: \"kubernetes.io/projected/86a6482d-04fc-4a4b-855f-2aabba305b90-kube-api-access-gk842\") pod \"auto-csr-approver-29535224-8lc69\" (UID: \"86a6482d-04fc-4a4b-855f-2aabba305b90\") " pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.429749 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk842\" (UniqueName: \"kubernetes.io/projected/86a6482d-04fc-4a4b-855f-2aabba305b90-kube-api-access-gk842\") pod \"auto-csr-approver-29535224-8lc69\" (UID: \"86a6482d-04fc-4a4b-855f-2aabba305b90\") " pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.467018 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk842\" (UniqueName: \"kubernetes.io/projected/86a6482d-04fc-4a4b-855f-2aabba305b90-kube-api-access-gk842\") pod \"auto-csr-approver-29535224-8lc69\" (UID: \"86a6482d-04fc-4a4b-855f-2aabba305b90\") " pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:00 crc kubenswrapper[4724]: I0226 13:44:00.607616 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:01 crc kubenswrapper[4724]: I0226 13:44:01.267879 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535224-8lc69"] Feb 26 13:44:01 crc kubenswrapper[4724]: I0226 13:44:01.423954 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:44:02 crc kubenswrapper[4724]: I0226 13:44:02.377153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535224-8lc69" event={"ID":"86a6482d-04fc-4a4b-855f-2aabba305b90","Type":"ContainerStarted","Data":"12beebbccd87902db348ca7d7aba6af3b306b2e96706167e15646d7960831d50"} Feb 26 13:44:04 crc kubenswrapper[4724]: I0226 13:44:04.400912 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535224-8lc69" event={"ID":"86a6482d-04fc-4a4b-855f-2aabba305b90","Type":"ContainerStarted","Data":"dd382777f4e7fcb01fde832cf3a90488f2c74dee341cc99a735b6bd0bffbf6cc"} Feb 26 13:44:04 crc kubenswrapper[4724]: I0226 13:44:04.430212 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535224-8lc69" podStartSLOduration=2.922223131 podStartE2EDuration="4.430191276s" podCreationTimestamp="2026-02-26 13:44:00 +0000 UTC" firstStartedPulling="2026-02-26 13:44:01.419836605 +0000 UTC m=+9508.075575720" lastFinishedPulling="2026-02-26 13:44:02.92780475 +0000 UTC m=+9509.583543865" observedRunningTime="2026-02-26 13:44:04.422477922 +0000 UTC m=+9511.078217037" watchObservedRunningTime="2026-02-26 13:44:04.430191276 +0000 UTC m=+9511.085930391" Feb 26 13:44:06 crc kubenswrapper[4724]: I0226 13:44:06.420379 4724 generic.go:334] "Generic (PLEG): container finished" podID="86a6482d-04fc-4a4b-855f-2aabba305b90" containerID="dd382777f4e7fcb01fde832cf3a90488f2c74dee341cc99a735b6bd0bffbf6cc" exitCode=0 Feb 26 13:44:06 crc kubenswrapper[4724]: I0226 13:44:06.420470 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535224-8lc69" event={"ID":"86a6482d-04fc-4a4b-855f-2aabba305b90","Type":"ContainerDied","Data":"dd382777f4e7fcb01fde832cf3a90488f2c74dee341cc99a735b6bd0bffbf6cc"} Feb 26 13:44:07 crc kubenswrapper[4724]: I0226 13:44:07.886346 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.019602 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk842\" (UniqueName: \"kubernetes.io/projected/86a6482d-04fc-4a4b-855f-2aabba305b90-kube-api-access-gk842\") pod \"86a6482d-04fc-4a4b-855f-2aabba305b90\" (UID: \"86a6482d-04fc-4a4b-855f-2aabba305b90\") " Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.026521 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a6482d-04fc-4a4b-855f-2aabba305b90-kube-api-access-gk842" (OuterVolumeSpecName: "kube-api-access-gk842") pod "86a6482d-04fc-4a4b-855f-2aabba305b90" (UID: "86a6482d-04fc-4a4b-855f-2aabba305b90"). InnerVolumeSpecName "kube-api-access-gk842". PluginName "kubernetes.io/projected", VolumeGidValue ""
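Note on the pod_startup_latency_tracker entry above: the relation between the logged fields appears to be podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, with podStartSLOduration equal to that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling), since the pod-startup SLI excludes pull time. A back-of-envelope check against the values in the entry (not kubelet source; field semantics inferred from the logged numbers, which it reproduces exactly):

    // slo_duration_sketch.go: verify the "Observed pod startup duration"
    // entry above for auto-csr-approver-29535224-8lc69.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse(time.RFC3339Nano, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-26T13:44:00Z")             // podCreationTimestamp
        firstPull := mustParse("2026-02-26T13:44:01.419836605Z") // firstStartedPulling
        lastPull := mustParse("2026-02-26T13:44:02.927804750Z")  // lastFinishedPulling
        running := mustParse("2026-02-26T13:44:04.430191276Z")   // watchObservedRunningTime

        e2e := running.Sub(created)          // 4.430191276s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 2.922223131s = podStartSLOduration
        fmt.Println(e2e, slo)
    }

The collect-profiles entry later in this log shows the degenerate case: its firstStartedPulling/lastFinishedPulling are the zero time (image already present), so podStartSLOduration and podStartE2EDuration coincide.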
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.122821 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk842\" (UniqueName: \"kubernetes.io/projected/86a6482d-04fc-4a4b-855f-2aabba305b90-kube-api-access-gk842\") on node \"crc\" DevicePath \"\"" Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.437453 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535224-8lc69" event={"ID":"86a6482d-04fc-4a4b-855f-2aabba305b90","Type":"ContainerDied","Data":"12beebbccd87902db348ca7d7aba6af3b306b2e96706167e15646d7960831d50"} Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.437707 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12beebbccd87902db348ca7d7aba6af3b306b2e96706167e15646d7960831d50" Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.437472 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535224-8lc69" Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.593037 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535218-7ssnh"] Feb 26 13:44:08 crc kubenswrapper[4724]: I0226 13:44:08.603488 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535218-7ssnh"] Feb 26 13:44:09 crc kubenswrapper[4724]: I0226 13:44:09.988548 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8907499-9343-4e22-b6a9-6c2b936d3a61" path="/var/lib/kubelet/pods/e8907499-9343-4e22-b6a9-6c2b936d3a61/volumes" Feb 26 13:44:28 crc kubenswrapper[4724]: I0226 13:44:28.698363 4724 scope.go:117] "RemoveContainer" containerID="0634962f1435f0c915d7f7133a38d34885861c08253767b0689d0fe34e16bac6" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.177505 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx"] Feb 26 13:45:00 crc kubenswrapper[4724]: E0226 13:45:00.179707 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a6482d-04fc-4a4b-855f-2aabba305b90" containerName="oc" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.179741 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a6482d-04fc-4a4b-855f-2aabba305b90" containerName="oc" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.180030 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a6482d-04fc-4a4b-855f-2aabba305b90" containerName="oc" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.180971 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.184671 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.185612 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.213830 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx"] Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.275278 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e24f3f4-e351-45ec-b54c-61eff2e0db52-secret-volume\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.275391 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e24f3f4-e351-45ec-b54c-61eff2e0db52-config-volume\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.275422 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8vm\" (UniqueName: \"kubernetes.io/projected/4e24f3f4-e351-45ec-b54c-61eff2e0db52-kube-api-access-tx8vm\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.377469 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e24f3f4-e351-45ec-b54c-61eff2e0db52-secret-volume\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.377564 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e24f3f4-e351-45ec-b54c-61eff2e0db52-config-volume\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.377585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8vm\" (UniqueName: \"kubernetes.io/projected/4e24f3f4-e351-45ec-b54c-61eff2e0db52-kube-api-access-tx8vm\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.378975 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e24f3f4-e351-45ec-b54c-61eff2e0db52-config-volume\") pod 
\"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.390951 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e24f3f4-e351-45ec-b54c-61eff2e0db52-secret-volume\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.397005 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8vm\" (UniqueName: \"kubernetes.io/projected/4e24f3f4-e351-45ec-b54c-61eff2e0db52-kube-api-access-tx8vm\") pod \"collect-profiles-29535225-qwlqx\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:00 crc kubenswrapper[4724]: I0226 13:45:00.521960 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:01 crc kubenswrapper[4724]: I0226 13:45:01.015971 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx"] Feb 26 13:45:01 crc kubenswrapper[4724]: I0226 13:45:01.294933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" event={"ID":"4e24f3f4-e351-45ec-b54c-61eff2e0db52","Type":"ContainerStarted","Data":"da8750ca448354b04863672565a541dab7d713e7075de2f26efdf0153c2b15c0"} Feb 26 13:45:02 crc kubenswrapper[4724]: I0226 13:45:02.307165 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" event={"ID":"4e24f3f4-e351-45ec-b54c-61eff2e0db52","Type":"ContainerStarted","Data":"49e79f8a0dab22bf44243342edaad1a388ba3deab46da2a5474608a44279997b"} Feb 26 13:45:02 crc kubenswrapper[4724]: I0226 13:45:02.341710 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" podStartSLOduration=2.341685365 podStartE2EDuration="2.341685365s" podCreationTimestamp="2026-02-26 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 13:45:02.336159826 +0000 UTC m=+9568.991898961" watchObservedRunningTime="2026-02-26 13:45:02.341685365 +0000 UTC m=+9568.997424480" Feb 26 13:45:03 crc kubenswrapper[4724]: I0226 13:45:03.319689 4724 generic.go:334] "Generic (PLEG): container finished" podID="4e24f3f4-e351-45ec-b54c-61eff2e0db52" containerID="49e79f8a0dab22bf44243342edaad1a388ba3deab46da2a5474608a44279997b" exitCode=0 Feb 26 13:45:03 crc kubenswrapper[4724]: I0226 13:45:03.319733 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" event={"ID":"4e24f3f4-e351-45ec-b54c-61eff2e0db52","Type":"ContainerDied","Data":"49e79f8a0dab22bf44243342edaad1a388ba3deab46da2a5474608a44279997b"} Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.749946 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.882919 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e24f3f4-e351-45ec-b54c-61eff2e0db52-config-volume\") pod \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.883519 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx8vm\" (UniqueName: \"kubernetes.io/projected/4e24f3f4-e351-45ec-b54c-61eff2e0db52-kube-api-access-tx8vm\") pod \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.883787 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e24f3f4-e351-45ec-b54c-61eff2e0db52-secret-volume\") pod \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\" (UID: \"4e24f3f4-e351-45ec-b54c-61eff2e0db52\") " Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.883848 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e24f3f4-e351-45ec-b54c-61eff2e0db52-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e24f3f4-e351-45ec-b54c-61eff2e0db52" (UID: "4e24f3f4-e351-45ec-b54c-61eff2e0db52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.884462 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e24f3f4-e351-45ec-b54c-61eff2e0db52-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.893748 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e24f3f4-e351-45ec-b54c-61eff2e0db52-kube-api-access-tx8vm" (OuterVolumeSpecName: "kube-api-access-tx8vm") pod "4e24f3f4-e351-45ec-b54c-61eff2e0db52" (UID: "4e24f3f4-e351-45ec-b54c-61eff2e0db52"). InnerVolumeSpecName "kube-api-access-tx8vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.894504 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e24f3f4-e351-45ec-b54c-61eff2e0db52-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e24f3f4-e351-45ec-b54c-61eff2e0db52" (UID: "4e24f3f4-e351-45ec-b54c-61eff2e0db52"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.986917 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e24f3f4-e351-45ec-b54c-61eff2e0db52-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 13:45:04 crc kubenswrapper[4724]: I0226 13:45:04.986971 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx8vm\" (UniqueName: \"kubernetes.io/projected/4e24f3f4-e351-45ec-b54c-61eff2e0db52-kube-api-access-tx8vm\") on node \"crc\" DevicePath \"\"" Feb 26 13:45:05 crc kubenswrapper[4724]: I0226 13:45:05.345649 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" event={"ID":"4e24f3f4-e351-45ec-b54c-61eff2e0db52","Type":"ContainerDied","Data":"da8750ca448354b04863672565a541dab7d713e7075de2f26efdf0153c2b15c0"} Feb 26 13:45:05 crc kubenswrapper[4724]: I0226 13:45:05.345690 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8750ca448354b04863672565a541dab7d713e7075de2f26efdf0153c2b15c0" Feb 26 13:45:05 crc kubenswrapper[4724]: I0226 13:45:05.345756 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx" Feb 26 13:45:05 crc kubenswrapper[4724]: I0226 13:45:05.467696 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"] Feb 26 13:45:05 crc kubenswrapper[4724]: I0226 13:45:05.483037 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535180-4dkb4"] Feb 26 13:45:05 crc kubenswrapper[4724]: I0226 13:45:05.987325 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6aac2e-5c33-4b6c-88ed-92c0426aae93" path="/var/lib/kubelet/pods/9c6aac2e-5c33-4b6c-88ed-92c0426aae93/volumes" Feb 26 13:45:29 crc kubenswrapper[4724]: I0226 13:45:29.287949 4724 scope.go:117] "RemoveContainer" containerID="db7310188a0904fc6778787df6a59a48c57de8b1692194496c00108913d10db6" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.159146 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535226-kr8bt"] Feb 26 13:46:00 crc kubenswrapper[4724]: E0226 13:46:00.160122 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e24f3f4-e351-45ec-b54c-61eff2e0db52" containerName="collect-profiles" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.160142 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e24f3f4-e351-45ec-b54c-61eff2e0db52" containerName="collect-profiles" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.160468 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e24f3f4-e351-45ec-b54c-61eff2e0db52" containerName="collect-profiles" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.161564 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.164349 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.164796 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.164962 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.175529 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535226-kr8bt"] Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.345186 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nth6\" (UniqueName: \"kubernetes.io/projected/b8139296-d918-4ab2-8ab6-e95b63b34c65-kube-api-access-2nth6\") pod \"auto-csr-approver-29535226-kr8bt\" (UID: \"b8139296-d918-4ab2-8ab6-e95b63b34c65\") " pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.448072 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nth6\" (UniqueName: \"kubernetes.io/projected/b8139296-d918-4ab2-8ab6-e95b63b34c65-kube-api-access-2nth6\") pod \"auto-csr-approver-29535226-kr8bt\" (UID: \"b8139296-d918-4ab2-8ab6-e95b63b34c65\") " pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.478076 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nth6\" (UniqueName: \"kubernetes.io/projected/b8139296-d918-4ab2-8ab6-e95b63b34c65-kube-api-access-2nth6\") pod \"auto-csr-approver-29535226-kr8bt\" (UID: \"b8139296-d918-4ab2-8ab6-e95b63b34c65\") " pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:00 crc kubenswrapper[4724]: I0226 13:46:00.502637 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:01 crc kubenswrapper[4724]: I0226 13:46:01.159876 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535226-kr8bt"] Feb 26 13:46:01 crc kubenswrapper[4724]: I0226 13:46:01.992659 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" event={"ID":"b8139296-d918-4ab2-8ab6-e95b63b34c65","Type":"ContainerStarted","Data":"6197cd9d18a6748735e4bc7ebdfc2d21c1576d216bcd36518598d316b2fe5b95"} Feb 26 13:46:03 crc kubenswrapper[4724]: I0226 13:46:03.002548 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" event={"ID":"b8139296-d918-4ab2-8ab6-e95b63b34c65","Type":"ContainerStarted","Data":"f62b8e9923c46333e7c1f34312d8fec5e5cf86c9b6f38cb18fee8f0586201093"} Feb 26 13:46:03 crc kubenswrapper[4724]: I0226 13:46:03.019591 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" podStartSLOduration=1.649925218 podStartE2EDuration="3.019550022s" podCreationTimestamp="2026-02-26 13:46:00 +0000 UTC" firstStartedPulling="2026-02-26 13:46:01.174598269 +0000 UTC m=+9627.830337384" lastFinishedPulling="2026-02-26 13:46:02.544223073 +0000 UTC m=+9629.199962188" observedRunningTime="2026-02-26 13:46:03.016643509 +0000 UTC m=+9629.672382624" watchObservedRunningTime="2026-02-26 13:46:03.019550022 +0000 UTC m=+9629.675289137" Feb 26 13:46:04 crc kubenswrapper[4724]: I0226 13:46:04.013777 4724 generic.go:334] "Generic (PLEG): container finished" podID="b8139296-d918-4ab2-8ab6-e95b63b34c65" containerID="f62b8e9923c46333e7c1f34312d8fec5e5cf86c9b6f38cb18fee8f0586201093" exitCode=0 Feb 26 13:46:04 crc kubenswrapper[4724]: I0226 13:46:04.014094 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" event={"ID":"b8139296-d918-4ab2-8ab6-e95b63b34c65","Type":"ContainerDied","Data":"f62b8e9923c46333e7c1f34312d8fec5e5cf86c9b6f38cb18fee8f0586201093"} Feb 26 13:46:05 crc kubenswrapper[4724]: I0226 13:46:05.485769 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:05 crc kubenswrapper[4724]: I0226 13:46:05.551601 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nth6\" (UniqueName: \"kubernetes.io/projected/b8139296-d918-4ab2-8ab6-e95b63b34c65-kube-api-access-2nth6\") pod \"b8139296-d918-4ab2-8ab6-e95b63b34c65\" (UID: \"b8139296-d918-4ab2-8ab6-e95b63b34c65\") " Feb 26 13:46:05 crc kubenswrapper[4724]: I0226 13:46:05.557102 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8139296-d918-4ab2-8ab6-e95b63b34c65-kube-api-access-2nth6" (OuterVolumeSpecName: "kube-api-access-2nth6") pod "b8139296-d918-4ab2-8ab6-e95b63b34c65" (UID: "b8139296-d918-4ab2-8ab6-e95b63b34c65"). InnerVolumeSpecName "kube-api-access-2nth6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:46:05 crc kubenswrapper[4724]: I0226 13:46:05.653532 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nth6\" (UniqueName: \"kubernetes.io/projected/b8139296-d918-4ab2-8ab6-e95b63b34c65-kube-api-access-2nth6\") on node \"crc\" DevicePath \"\"" Feb 26 13:46:06 crc kubenswrapper[4724]: I0226 13:46:06.035374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" event={"ID":"b8139296-d918-4ab2-8ab6-e95b63b34c65","Type":"ContainerDied","Data":"6197cd9d18a6748735e4bc7ebdfc2d21c1576d216bcd36518598d316b2fe5b95"} Feb 26 13:46:06 crc kubenswrapper[4724]: I0226 13:46:06.035415 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6197cd9d18a6748735e4bc7ebdfc2d21c1576d216bcd36518598d316b2fe5b95" Feb 26 13:46:06 crc kubenswrapper[4724]: I0226 13:46:06.035471 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535226-kr8bt" Feb 26 13:46:06 crc kubenswrapper[4724]: I0226 13:46:06.114442 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535220-tb697"] Feb 26 13:46:06 crc kubenswrapper[4724]: I0226 13:46:06.123297 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535220-tb697"] Feb 26 13:46:07 crc kubenswrapper[4724]: I0226 13:46:07.985491 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e47d1c3-335a-45fe-b310-7f758e7fc85c" path="/var/lib/kubelet/pods/1e47d1c3-335a-45fe-b310-7f758e7fc85c/volumes" Feb 26 13:46:16 crc kubenswrapper[4724]: I0226 13:46:16.906948 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:46:16 crc kubenswrapper[4724]: I0226 13:46:16.908001 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:46:29 crc kubenswrapper[4724]: I0226 13:46:29.402372 4724 scope.go:117] "RemoveContainer" containerID="c5e9b7dd8b55ec6c2b27a35b323e2fb94842a125bd41d0440de2665df49d948f" Feb 26 13:46:46 crc kubenswrapper[4724]: I0226 13:46:46.906874 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:46:46 crc kubenswrapper[4724]: I0226 13:46:46.907878 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:47:16 crc kubenswrapper[4724]: I0226 13:47:16.907009 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:47:16 crc kubenswrapper[4724]: I0226 13:47:16.908264 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:47:16 crc kubenswrapper[4724]: I0226 13:47:16.908403 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:47:16 crc kubenswrapper[4724]: I0226 13:47:16.909712 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:47:16 crc kubenswrapper[4724]: I0226 13:47:16.909780 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" gracePeriod=600 Feb 26 13:47:17 crc kubenswrapper[4724]: E0226 13:47:17.056453 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:47:17 crc kubenswrapper[4724]: I0226 13:47:17.787587 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" exitCode=0 Feb 26 13:47:17 crc kubenswrapper[4724]: I0226 13:47:17.787689 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b"} Feb 26 13:47:17 crc kubenswrapper[4724]: I0226 13:47:17.787792 4724 scope.go:117] "RemoveContainer" containerID="097f7d307b384e7c5bfa9a6744e84c3c14eef3ca5d1ac323f91926e85d12bb8f" Feb 26 13:47:17 crc kubenswrapper[4724]: I0226 13:47:17.788821 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:47:17 crc kubenswrapper[4724]: E0226 13:47:17.789260 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" 
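Note on the machine-config-daemon sequence above: after the liveness probe fails and the container is killed with gracePeriod=600, every subsequent sync attempt is rejected with "CrashLoopBackOff: back-off 5m0s", and that message repeats at each retry below. The kubelet's restart backoff doubles per failed restart until it saturates at a cap; the "5m0s" in the message is that cap already reached, which implies this container had been failing for some time before this excerpt. A sketch of the schedule, assuming kubelet's long-standing defaults (10s initial delay, factor 2, 5m cap) rather than anything read from this cluster's config:

    // crashloop_backoff_sketch.go: illustrative model of the restart
    // delay behind the repeated "back-off 5m0s" lines in this log.
    // The initial delay, doubling factor, and cap are assumed defaults.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 10 * time.Second
            maxDelay     = 5 * time.Minute
        )
        delay := initialDelay
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // once capped, every sync logs "back-off 5m0s"
            }
        }
    }

Note also that the "Error syncing pod, skipping" lines that follow are sync-loop retries being refused while the backoff timer runs, not fresh restart attempts; that is why they recur every 10-15 seconds even though the quoted delay is five minutes.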
Feb 26 13:47:28 crc kubenswrapper[4724]: I0226 13:47:28.977471 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:47:28 crc kubenswrapper[4724]: E0226 13:47:28.978900 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:47:39 crc kubenswrapper[4724]: I0226 13:47:39.975647 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:47:39 crc kubenswrapper[4724]: E0226 13:47:39.976483 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:47:43 crc kubenswrapper[4724]: E0226 13:47:43.081400 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 26 13:47:54 crc kubenswrapper[4724]: I0226 13:47:54.975447 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:47:54 crc kubenswrapper[4724]: E0226 13:47:54.976360 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.164397 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535228-6hnl7"] Feb 26 13:48:00 crc kubenswrapper[4724]: E0226 13:48:00.165306 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8139296-d918-4ab2-8ab6-e95b63b34c65" containerName="oc" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.165329 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8139296-d918-4ab2-8ab6-e95b63b34c65" containerName="oc" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.165619 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8139296-d918-4ab2-8ab6-e95b63b34c65" containerName="oc" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.166419 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.169437 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.169789 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.170450 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.192100 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535228-6hnl7"] Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.327660 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cld\" (UniqueName: \"kubernetes.io/projected/1871320c-3557-4e2d-aa76-f50734b03731-kube-api-access-m4cld\") pod \"auto-csr-approver-29535228-6hnl7\" (UID: \"1871320c-3557-4e2d-aa76-f50734b03731\") " pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.430159 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cld\" (UniqueName: \"kubernetes.io/projected/1871320c-3557-4e2d-aa76-f50734b03731-kube-api-access-m4cld\") pod \"auto-csr-approver-29535228-6hnl7\" (UID: \"1871320c-3557-4e2d-aa76-f50734b03731\") " pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.464593 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cld\" (UniqueName: \"kubernetes.io/projected/1871320c-3557-4e2d-aa76-f50734b03731-kube-api-access-m4cld\") pod \"auto-csr-approver-29535228-6hnl7\" (UID: \"1871320c-3557-4e2d-aa76-f50734b03731\") " pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:00 crc kubenswrapper[4724]: I0226 13:48:00.512825 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:01 crc kubenswrapper[4724]: I0226 13:48:01.550014 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535228-6hnl7"] Feb 26 13:48:02 crc kubenswrapper[4724]: I0226 13:48:02.232095 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" event={"ID":"1871320c-3557-4e2d-aa76-f50734b03731","Type":"ContainerStarted","Data":"217ec8522ca97c12703978520350ab31f1571a600e7244477b0fc435aa8a290d"} Feb 26 13:48:04 crc kubenswrapper[4724]: I0226 13:48:04.260797 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" event={"ID":"1871320c-3557-4e2d-aa76-f50734b03731","Type":"ContainerStarted","Data":"8e2d0a6674e1c3abecff4783af79c6e89b69a67e5891d65b8a909e36d232b2fb"} Feb 26 13:48:04 crc kubenswrapper[4724]: I0226 13:48:04.284653 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" podStartSLOduration=2.936749195 podStartE2EDuration="4.284605776s" podCreationTimestamp="2026-02-26 13:48:00 +0000 UTC" firstStartedPulling="2026-02-26 13:48:01.562038224 +0000 UTC m=+9748.217777339" lastFinishedPulling="2026-02-26 13:48:02.909894805 +0000 UTC m=+9749.565633920" observedRunningTime="2026-02-26 13:48:04.280312197 +0000 UTC m=+9750.936051332" watchObservedRunningTime="2026-02-26 13:48:04.284605776 +0000 UTC m=+9750.940344921" Feb 26 13:48:06 crc kubenswrapper[4724]: I0226 13:48:06.304352 4724 generic.go:334] "Generic (PLEG): container finished" podID="1871320c-3557-4e2d-aa76-f50734b03731" containerID="8e2d0a6674e1c3abecff4783af79c6e89b69a67e5891d65b8a909e36d232b2fb" exitCode=0 Feb 26 13:48:06 crc kubenswrapper[4724]: I0226 13:48:06.305730 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" event={"ID":"1871320c-3557-4e2d-aa76-f50734b03731","Type":"ContainerDied","Data":"8e2d0a6674e1c3abecff4783af79c6e89b69a67e5891d65b8a909e36d232b2fb"} Feb 26 13:48:07 crc kubenswrapper[4724]: I0226 13:48:07.976113 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:48:07 crc kubenswrapper[4724]: E0226 13:48:07.976554 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:48:08 crc kubenswrapper[4724]: I0226 13:48:08.846771 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.035787 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4cld\" (UniqueName: \"kubernetes.io/projected/1871320c-3557-4e2d-aa76-f50734b03731-kube-api-access-m4cld\") pod \"1871320c-3557-4e2d-aa76-f50734b03731\" (UID: \"1871320c-3557-4e2d-aa76-f50734b03731\") " Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.052700 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1871320c-3557-4e2d-aa76-f50734b03731-kube-api-access-m4cld" (OuterVolumeSpecName: "kube-api-access-m4cld") pod "1871320c-3557-4e2d-aa76-f50734b03731" (UID: "1871320c-3557-4e2d-aa76-f50734b03731"). InnerVolumeSpecName "kube-api-access-m4cld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.139205 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4cld\" (UniqueName: \"kubernetes.io/projected/1871320c-3557-4e2d-aa76-f50734b03731-kube-api-access-m4cld\") on node \"crc\" DevicePath \"\"" Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.340392 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" event={"ID":"1871320c-3557-4e2d-aa76-f50734b03731","Type":"ContainerDied","Data":"217ec8522ca97c12703978520350ab31f1571a600e7244477b0fc435aa8a290d"} Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.340775 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535228-6hnl7" Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.341031 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="217ec8522ca97c12703978520350ab31f1571a600e7244477b0fc435aa8a290d" Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.943532 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535222-fhwv4"] Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.952663 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535222-fhwv4"] Feb 26 13:48:09 crc kubenswrapper[4724]: I0226 13:48:09.990931 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea751cb-6818-4eaf-863d-4c26500a5dd3" path="/var/lib/kubelet/pods/dea751cb-6818-4eaf-863d-4c26500a5dd3/volumes" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.336482 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8t59c"] Feb 26 13:48:20 crc kubenswrapper[4724]: E0226 13:48:20.338029 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1871320c-3557-4e2d-aa76-f50734b03731" containerName="oc" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.338047 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1871320c-3557-4e2d-aa76-f50734b03731" containerName="oc" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.338313 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1871320c-3557-4e2d-aa76-f50734b03731" containerName="oc" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.344529 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.353100 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t59c"] Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.416631 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-utilities\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.416735 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-catalog-content\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.416954 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7qw\" (UniqueName: \"kubernetes.io/projected/cd5fc701-212e-452b-a480-16ba63f053df-kube-api-access-wl7qw\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.519484 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-utilities\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.519564 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-catalog-content\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.519654 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7qw\" (UniqueName: \"kubernetes.io/projected/cd5fc701-212e-452b-a480-16ba63f053df-kube-api-access-wl7qw\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.520363 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-utilities\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.520510 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-catalog-content\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.560133 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wl7qw\" (UniqueName: \"kubernetes.io/projected/cd5fc701-212e-452b-a480-16ba63f053df-kube-api-access-wl7qw\") pod \"redhat-marketplace-8t59c\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:20 crc kubenswrapper[4724]: I0226 13:48:20.705421 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:21 crc kubenswrapper[4724]: I0226 13:48:21.802931 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t59c"] Feb 26 13:48:22 crc kubenswrapper[4724]: I0226 13:48:22.647315 4724 generic.go:334] "Generic (PLEG): container finished" podID="cd5fc701-212e-452b-a480-16ba63f053df" containerID="700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f" exitCode=0 Feb 26 13:48:22 crc kubenswrapper[4724]: I0226 13:48:22.647359 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerDied","Data":"700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f"} Feb 26 13:48:22 crc kubenswrapper[4724]: I0226 13:48:22.647760 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerStarted","Data":"befb4b980af3fade921377503864bc0014647c5f552270c090a67011936f9b62"} Feb 26 13:48:22 crc kubenswrapper[4724]: I0226 13:48:22.976480 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:48:22 crc kubenswrapper[4724]: E0226 13:48:22.976890 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:48:24 crc kubenswrapper[4724]: I0226 13:48:24.686207 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerStarted","Data":"0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32"} Feb 26 13:48:25 crc kubenswrapper[4724]: I0226 13:48:25.699218 4724 generic.go:334] "Generic (PLEG): container finished" podID="cd5fc701-212e-452b-a480-16ba63f053df" containerID="0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32" exitCode=0 Feb 26 13:48:25 crc kubenswrapper[4724]: I0226 13:48:25.699291 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerDied","Data":"0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32"} Feb 26 13:48:26 crc kubenswrapper[4724]: I0226 13:48:26.712700 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerStarted","Data":"94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140"} Feb 26 13:48:26 crc kubenswrapper[4724]: I0226 13:48:26.741145 4724 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-8t59c" podStartSLOduration=3.081016401 podStartE2EDuration="6.741092666s" podCreationTimestamp="2026-02-26 13:48:20 +0000 UTC" firstStartedPulling="2026-02-26 13:48:22.649315643 +0000 UTC m=+9769.305054758" lastFinishedPulling="2026-02-26 13:48:26.309391898 +0000 UTC m=+9772.965131023" observedRunningTime="2026-02-26 13:48:26.733355841 +0000 UTC m=+9773.389094986" watchObservedRunningTime="2026-02-26 13:48:26.741092666 +0000 UTC m=+9773.396831781" Feb 26 13:48:29 crc kubenswrapper[4724]: I0226 13:48:29.519746 4724 scope.go:117] "RemoveContainer" containerID="0f8f0f5cc2c7f6b7eaeb518923a9e5cb4e3f9bcfec37ac516c6c8c8fc3c86185" Feb 26 13:48:30 crc kubenswrapper[4724]: I0226 13:48:30.707519 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:30 crc kubenswrapper[4724]: I0226 13:48:30.707576 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:30 crc kubenswrapper[4724]: I0226 13:48:30.882261 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:37 crc kubenswrapper[4724]: I0226 13:48:37.976802 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:48:37 crc kubenswrapper[4724]: E0226 13:48:37.977952 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:48:40 crc kubenswrapper[4724]: I0226 13:48:40.762753 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:40 crc kubenswrapper[4724]: I0226 13:48:40.823145 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t59c"] Feb 26 13:48:40 crc kubenswrapper[4724]: I0226 13:48:40.873746 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8t59c" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="registry-server" containerID="cri-o://94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140" gracePeriod=2 Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.538319 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.716586 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-utilities\") pod \"cd5fc701-212e-452b-a480-16ba63f053df\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.716796 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl7qw\" (UniqueName: \"kubernetes.io/projected/cd5fc701-212e-452b-a480-16ba63f053df-kube-api-access-wl7qw\") pod \"cd5fc701-212e-452b-a480-16ba63f053df\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.716981 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-catalog-content\") pod \"cd5fc701-212e-452b-a480-16ba63f053df\" (UID: \"cd5fc701-212e-452b-a480-16ba63f053df\") " Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.718303 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-utilities" (OuterVolumeSpecName: "utilities") pod "cd5fc701-212e-452b-a480-16ba63f053df" (UID: "cd5fc701-212e-452b-a480-16ba63f053df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.731584 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5fc701-212e-452b-a480-16ba63f053df-kube-api-access-wl7qw" (OuterVolumeSpecName: "kube-api-access-wl7qw") pod "cd5fc701-212e-452b-a480-16ba63f053df" (UID: "cd5fc701-212e-452b-a480-16ba63f053df"). InnerVolumeSpecName "kube-api-access-wl7qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.739091 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd5fc701-212e-452b-a480-16ba63f053df" (UID: "cd5fc701-212e-452b-a480-16ba63f053df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.820292 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl7qw\" (UniqueName: \"kubernetes.io/projected/cd5fc701-212e-452b-a480-16ba63f053df-kube-api-access-wl7qw\") on node \"crc\" DevicePath \"\"" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.820346 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.820356 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd5fc701-212e-452b-a480-16ba63f053df-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.893454 4724 generic.go:334] "Generic (PLEG): container finished" podID="cd5fc701-212e-452b-a480-16ba63f053df" containerID="94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140" exitCode=0 Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.893541 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerDied","Data":"94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140"} Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.893676 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8t59c" event={"ID":"cd5fc701-212e-452b-a480-16ba63f053df","Type":"ContainerDied","Data":"befb4b980af3fade921377503864bc0014647c5f552270c090a67011936f9b62"} Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.893671 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8t59c" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.893709 4724 scope.go:117] "RemoveContainer" containerID="94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140" Feb 26 13:48:41 crc kubenswrapper[4724]: I0226 13:48:41.962850 4724 scope.go:117] "RemoveContainer" containerID="0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.021425 4724 scope.go:117] "RemoveContainer" containerID="700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.028450 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t59c"] Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.041105 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8t59c"] Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.074751 4724 scope.go:117] "RemoveContainer" containerID="94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140" Feb 26 13:48:42 crc kubenswrapper[4724]: E0226 13:48:42.075637 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140\": container with ID starting with 94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140 not found: ID does not exist" containerID="94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.075689 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140"} err="failed to get container status \"94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140\": rpc error: code = NotFound desc = could not find container \"94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140\": container with ID starting with 94e3e59d4b26e9694383124be26df2820ac1c07c2175ab04ef897de4f4422140 not found: ID does not exist" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.075723 4724 scope.go:117] "RemoveContainer" containerID="0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32" Feb 26 13:48:42 crc kubenswrapper[4724]: E0226 13:48:42.076157 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32\": container with ID starting with 0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32 not found: ID does not exist" containerID="0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.076198 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32"} err="failed to get container status \"0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32\": rpc error: code = NotFound desc = could not find container \"0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32\": container with ID starting with 0819b24a2191246f6dd884c6202445a10355b586150b7997dc456903ddd92f32 not found: ID does not exist" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.076216 4724 scope.go:117] "RemoveContainer" 
containerID="700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f" Feb 26 13:48:42 crc kubenswrapper[4724]: E0226 13:48:42.077003 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f\": container with ID starting with 700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f not found: ID does not exist" containerID="700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f" Feb 26 13:48:42 crc kubenswrapper[4724]: I0226 13:48:42.077038 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f"} err="failed to get container status \"700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f\": rpc error: code = NotFound desc = could not find container \"700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f\": container with ID starting with 700c5e89d79277842ea576d893191399e1a56a973c8fe8b7f5b20baff238d14f not found: ID does not exist" Feb 26 13:48:43 crc kubenswrapper[4724]: I0226 13:48:43.988845 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd5fc701-212e-452b-a480-16ba63f053df" path="/var/lib/kubelet/pods/cd5fc701-212e-452b-a480-16ba63f053df/volumes" Feb 26 13:48:51 crc kubenswrapper[4724]: I0226 13:48:51.976574 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:48:51 crc kubenswrapper[4724]: E0226 13:48:51.977741 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.136837 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gvthd"] Feb 26 13:49:03 crc kubenswrapper[4724]: E0226 13:49:03.138867 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="extract-content" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.138891 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="extract-content" Feb 26 13:49:03 crc kubenswrapper[4724]: E0226 13:49:03.138918 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="registry-server" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.138926 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="registry-server" Feb 26 13:49:03 crc kubenswrapper[4724]: E0226 13:49:03.138944 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="extract-utilities" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.138952 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="extract-utilities" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.139279 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd5fc701-212e-452b-a480-16ba63f053df" containerName="registry-server" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.141352 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.149873 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gvthd"] Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.186607 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsvl\" (UniqueName: \"kubernetes.io/projected/0c411b47-525e-4836-b275-9c95d26a0882-kube-api-access-jdsvl\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.187129 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-utilities\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.187317 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-catalog-content\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.290335 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdsvl\" (UniqueName: \"kubernetes.io/projected/0c411b47-525e-4836-b275-9c95d26a0882-kube-api-access-jdsvl\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.290458 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-utilities\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.290533 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-catalog-content\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.291122 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-catalog-content\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.291439 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-utilities\") pod \"certified-operators-gvthd\" (UID: 
\"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.335563 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdsvl\" (UniqueName: \"kubernetes.io/projected/0c411b47-525e-4836-b275-9c95d26a0882-kube-api-access-jdsvl\") pod \"certified-operators-gvthd\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:03 crc kubenswrapper[4724]: I0226 13:49:03.481766 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:04 crc kubenswrapper[4724]: I0226 13:49:04.081648 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gvthd"] Feb 26 13:49:05 crc kubenswrapper[4724]: I0226 13:49:05.166897 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c411b47-525e-4836-b275-9c95d26a0882" containerID="f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d" exitCode=0 Feb 26 13:49:05 crc kubenswrapper[4724]: I0226 13:49:05.166963 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerDied","Data":"f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d"} Feb 26 13:49:05 crc kubenswrapper[4724]: I0226 13:49:05.167542 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerStarted","Data":"c1f95a240990e2698076e78a0dca03d2d96ea7824f8d34defc43683433fb1150"} Feb 26 13:49:05 crc kubenswrapper[4724]: I0226 13:49:05.171113 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:49:06 crc kubenswrapper[4724]: I0226 13:49:06.179888 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerStarted","Data":"0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113"} Feb 26 13:49:06 crc kubenswrapper[4724]: I0226 13:49:06.975961 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:49:06 crc kubenswrapper[4724]: E0226 13:49:06.976541 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:49:09 crc kubenswrapper[4724]: I0226 13:49:09.219752 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c411b47-525e-4836-b275-9c95d26a0882" containerID="0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113" exitCode=0 Feb 26 13:49:09 crc kubenswrapper[4724]: I0226 13:49:09.219814 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerDied","Data":"0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113"} Feb 26 13:49:10 crc kubenswrapper[4724]: I0226 
13:49:10.239992 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerStarted","Data":"43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4"} Feb 26 13:49:10 crc kubenswrapper[4724]: I0226 13:49:10.270288 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gvthd" podStartSLOduration=2.770434513 podStartE2EDuration="7.270263468s" podCreationTimestamp="2026-02-26 13:49:03 +0000 UTC" firstStartedPulling="2026-02-26 13:49:05.170647496 +0000 UTC m=+9811.826386611" lastFinishedPulling="2026-02-26 13:49:09.670476451 +0000 UTC m=+9816.326215566" observedRunningTime="2026-02-26 13:49:10.26120503 +0000 UTC m=+9816.916944145" watchObservedRunningTime="2026-02-26 13:49:10.270263468 +0000 UTC m=+9816.926002603" Feb 26 13:49:13 crc kubenswrapper[4724]: I0226 13:49:13.482652 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:13 crc kubenswrapper[4724]: I0226 13:49:13.483295 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:14 crc kubenswrapper[4724]: I0226 13:49:14.531738 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gvthd" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="registry-server" probeResult="failure" output=< Feb 26 13:49:14 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:49:14 crc kubenswrapper[4724]: > Feb 26 13:49:19 crc kubenswrapper[4724]: I0226 13:49:19.978962 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:49:19 crc kubenswrapper[4724]: E0226 13:49:19.979587 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:49:24 crc kubenswrapper[4724]: I0226 13:49:24.530431 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gvthd" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="registry-server" probeResult="failure" output=< Feb 26 13:49:24 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:49:24 crc kubenswrapper[4724]: > Feb 26 13:49:31 crc kubenswrapper[4724]: I0226 13:49:31.976047 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:49:31 crc kubenswrapper[4724]: E0226 13:49:31.977056 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:49:33 crc kubenswrapper[4724]: I0226 13:49:33.547612 4724 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:33 crc kubenswrapper[4724]: I0226 13:49:33.608195 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:34 crc kubenswrapper[4724]: I0226 13:49:34.325612 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gvthd"] Feb 26 13:49:35 crc kubenswrapper[4724]: I0226 13:49:35.499249 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gvthd" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="registry-server" containerID="cri-o://43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4" gracePeriod=2 Feb 26 13:49:35 crc kubenswrapper[4724]: I0226 13:49:35.972315 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.115594 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-catalog-content\") pod \"0c411b47-525e-4836-b275-9c95d26a0882\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.117915 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-utilities\") pod \"0c411b47-525e-4836-b275-9c95d26a0882\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.118358 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-utilities" (OuterVolumeSpecName: "utilities") pod "0c411b47-525e-4836-b275-9c95d26a0882" (UID: "0c411b47-525e-4836-b275-9c95d26a0882"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.118658 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdsvl\" (UniqueName: \"kubernetes.io/projected/0c411b47-525e-4836-b275-9c95d26a0882-kube-api-access-jdsvl\") pod \"0c411b47-525e-4836-b275-9c95d26a0882\" (UID: \"0c411b47-525e-4836-b275-9c95d26a0882\") " Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.120667 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.130397 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c411b47-525e-4836-b275-9c95d26a0882-kube-api-access-jdsvl" (OuterVolumeSpecName: "kube-api-access-jdsvl") pod "0c411b47-525e-4836-b275-9c95d26a0882" (UID: "0c411b47-525e-4836-b275-9c95d26a0882"). InnerVolumeSpecName "kube-api-access-jdsvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.176268 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c411b47-525e-4836-b275-9c95d26a0882" (UID: "0c411b47-525e-4836-b275-9c95d26a0882"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.231886 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdsvl\" (UniqueName: \"kubernetes.io/projected/0c411b47-525e-4836-b275-9c95d26a0882-kube-api-access-jdsvl\") on node \"crc\" DevicePath \"\"" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.231946 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c411b47-525e-4836-b275-9c95d26a0882-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.511761 4724 generic.go:334] "Generic (PLEG): container finished" podID="0c411b47-525e-4836-b275-9c95d26a0882" containerID="43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4" exitCode=0 Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.511837 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerDied","Data":"43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4"} Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.511880 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvthd" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.511927 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvthd" event={"ID":"0c411b47-525e-4836-b275-9c95d26a0882","Type":"ContainerDied","Data":"c1f95a240990e2698076e78a0dca03d2d96ea7824f8d34defc43683433fb1150"} Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.511957 4724 scope.go:117] "RemoveContainer" containerID="43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.551673 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gvthd"] Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.555619 4724 scope.go:117] "RemoveContainer" containerID="0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.566017 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gvthd"] Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.589274 4724 scope.go:117] "RemoveContainer" containerID="f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.650207 4724 scope.go:117] "RemoveContainer" containerID="43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4" Feb 26 13:49:36 crc kubenswrapper[4724]: E0226 13:49:36.650985 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4\": container with ID starting with 
43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4 not found: ID does not exist" containerID="43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.651025 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4"} err="failed to get container status \"43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4\": rpc error: code = NotFound desc = could not find container \"43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4\": container with ID starting with 43948dcacd192bd8eeaa1a764259d89f55c4cd5e3a14e60a7dbf437ed905f3f4 not found: ID does not exist" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.651051 4724 scope.go:117] "RemoveContainer" containerID="0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113" Feb 26 13:49:36 crc kubenswrapper[4724]: E0226 13:49:36.651465 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113\": container with ID starting with 0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113 not found: ID does not exist" containerID="0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.651494 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113"} err="failed to get container status \"0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113\": rpc error: code = NotFound desc = could not find container \"0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113\": container with ID starting with 0249ea2fde496e69eba3c60677eca1fbb1b93fb271a51c5a74e67772588e0113 not found: ID does not exist" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.651509 4724 scope.go:117] "RemoveContainer" containerID="f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d" Feb 26 13:49:36 crc kubenswrapper[4724]: E0226 13:49:36.652133 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d\": container with ID starting with f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d not found: ID does not exist" containerID="f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d" Feb 26 13:49:36 crc kubenswrapper[4724]: I0226 13:49:36.652171 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d"} err="failed to get container status \"f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d\": rpc error: code = NotFound desc = could not find container \"f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d\": container with ID starting with f11160af0c18ad135da602134b882f804955b56e62ff2f5be86c6e4066adbc4d not found: ID does not exist" Feb 26 13:49:37 crc kubenswrapper[4724]: I0226 13:49:37.990506 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c411b47-525e-4836-b275-9c95d26a0882" path="/var/lib/kubelet/pods/0c411b47-525e-4836-b275-9c95d26a0882/volumes" Feb 26 13:49:46 crc kubenswrapper[4724]: I0226 13:49:46.976472 
4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:49:46 crc kubenswrapper[4724]: E0226 13:49:46.977652 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:49:58 crc kubenswrapper[4724]: I0226 13:49:58.975584 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:49:58 crc kubenswrapper[4724]: E0226 13:49:58.976430 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.153614 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535230-89n8p"] Feb 26 13:50:00 crc kubenswrapper[4724]: E0226 13:50:00.154997 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="extract-utilities" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.155118 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="extract-utilities" Feb 26 13:50:00 crc kubenswrapper[4724]: E0226 13:50:00.155276 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="registry-server" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.155370 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="registry-server" Feb 26 13:50:00 crc kubenswrapper[4724]: E0226 13:50:00.155463 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="extract-content" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.155536 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="extract-content" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.155897 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c411b47-525e-4836-b275-9c95d26a0882" containerName="registry-server" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.160262 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.164681 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.164956 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.165630 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535230-89n8p"] Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.165651 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.285227 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwb24\" (UniqueName: \"kubernetes.io/projected/2003f32e-3582-4e5d-a0ae-bbf3aa146536-kube-api-access-jwb24\") pod \"auto-csr-approver-29535230-89n8p\" (UID: \"2003f32e-3582-4e5d-a0ae-bbf3aa146536\") " pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.387226 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwb24\" (UniqueName: \"kubernetes.io/projected/2003f32e-3582-4e5d-a0ae-bbf3aa146536-kube-api-access-jwb24\") pod \"auto-csr-approver-29535230-89n8p\" (UID: \"2003f32e-3582-4e5d-a0ae-bbf3aa146536\") " pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.409523 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwb24\" (UniqueName: \"kubernetes.io/projected/2003f32e-3582-4e5d-a0ae-bbf3aa146536-kube-api-access-jwb24\") pod \"auto-csr-approver-29535230-89n8p\" (UID: \"2003f32e-3582-4e5d-a0ae-bbf3aa146536\") " pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:00 crc kubenswrapper[4724]: I0226 13:50:00.492549 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:01 crc kubenswrapper[4724]: I0226 13:50:01.039113 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535230-89n8p"] Feb 26 13:50:01 crc kubenswrapper[4724]: I0226 13:50:01.770347 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535230-89n8p" event={"ID":"2003f32e-3582-4e5d-a0ae-bbf3aa146536","Type":"ContainerStarted","Data":"cb1ac980e743f78cac14170598f670a8200288624c85aecf9bc1ec2fcc64e855"} Feb 26 13:50:02 crc kubenswrapper[4724]: I0226 13:50:02.783056 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535230-89n8p" event={"ID":"2003f32e-3582-4e5d-a0ae-bbf3aa146536","Type":"ContainerStarted","Data":"210be6f0568bc693371f3278ee0772229decf941b9fcfd3db328f2b331063781"} Feb 26 13:50:02 crc kubenswrapper[4724]: I0226 13:50:02.818895 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535230-89n8p" podStartSLOduration=1.407640809 podStartE2EDuration="2.818831462s" podCreationTimestamp="2026-02-26 13:50:00 +0000 UTC" firstStartedPulling="2026-02-26 13:50:01.027615865 +0000 UTC m=+9867.683354980" lastFinishedPulling="2026-02-26 13:50:02.438806518 +0000 UTC m=+9869.094545633" observedRunningTime="2026-02-26 13:50:02.800475158 +0000 UTC m=+9869.456214283" watchObservedRunningTime="2026-02-26 13:50:02.818831462 +0000 UTC m=+9869.474570577" Feb 26 13:50:04 crc kubenswrapper[4724]: I0226 13:50:04.931213 4724 generic.go:334] "Generic (PLEG): container finished" podID="2003f32e-3582-4e5d-a0ae-bbf3aa146536" containerID="210be6f0568bc693371f3278ee0772229decf941b9fcfd3db328f2b331063781" exitCode=0 Feb 26 13:50:04 crc kubenswrapper[4724]: I0226 13:50:04.931301 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535230-89n8p" event={"ID":"2003f32e-3582-4e5d-a0ae-bbf3aa146536","Type":"ContainerDied","Data":"210be6f0568bc693371f3278ee0772229decf941b9fcfd3db328f2b331063781"} Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.362674 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.543585 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwb24\" (UniqueName: \"kubernetes.io/projected/2003f32e-3582-4e5d-a0ae-bbf3aa146536-kube-api-access-jwb24\") pod \"2003f32e-3582-4e5d-a0ae-bbf3aa146536\" (UID: \"2003f32e-3582-4e5d-a0ae-bbf3aa146536\") " Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.550530 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2003f32e-3582-4e5d-a0ae-bbf3aa146536-kube-api-access-jwb24" (OuterVolumeSpecName: "kube-api-access-jwb24") pod "2003f32e-3582-4e5d-a0ae-bbf3aa146536" (UID: "2003f32e-3582-4e5d-a0ae-bbf3aa146536"). InnerVolumeSpecName "kube-api-access-jwb24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.645918 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwb24\" (UniqueName: \"kubernetes.io/projected/2003f32e-3582-4e5d-a0ae-bbf3aa146536-kube-api-access-jwb24\") on node \"crc\" DevicePath \"\"" Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.954141 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535230-89n8p" event={"ID":"2003f32e-3582-4e5d-a0ae-bbf3aa146536","Type":"ContainerDied","Data":"cb1ac980e743f78cac14170598f670a8200288624c85aecf9bc1ec2fcc64e855"} Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.954204 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1ac980e743f78cac14170598f670a8200288624c85aecf9bc1ec2fcc64e855" Feb 26 13:50:06 crc kubenswrapper[4724]: I0226 13:50:06.954567 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535230-89n8p" Feb 26 13:50:07 crc kubenswrapper[4724]: I0226 13:50:07.026348 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535224-8lc69"] Feb 26 13:50:07 crc kubenswrapper[4724]: I0226 13:50:07.035711 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535224-8lc69"] Feb 26 13:50:07 crc kubenswrapper[4724]: E0226 13:50:07.055165 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2003f32e_3582_4e5d_a0ae_bbf3aa146536.slice\": RecentStats: unable to find data in memory cache]" Feb 26 13:50:07 crc kubenswrapper[4724]: I0226 13:50:07.994130 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a6482d-04fc-4a4b-855f-2aabba305b90" path="/var/lib/kubelet/pods/86a6482d-04fc-4a4b-855f-2aabba305b90/volumes" Feb 26 13:50:13 crc kubenswrapper[4724]: I0226 13:50:13.982080 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:50:13 crc kubenswrapper[4724]: E0226 13:50:13.987363 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:50:24 crc kubenswrapper[4724]: I0226 13:50:24.975691 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:50:24 crc kubenswrapper[4724]: E0226 13:50:24.976599 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:50:29 crc kubenswrapper[4724]: I0226 13:50:29.681874 4724 scope.go:117] "RemoveContainer" containerID="dd382777f4e7fcb01fde832cf3a90488f2c74dee341cc99a735b6bd0bffbf6cc" Feb 26 13:50:39 
crc kubenswrapper[4724]: I0226 13:50:39.976172 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:50:39 crc kubenswrapper[4724]: E0226 13:50:39.976980 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:50:53 crc kubenswrapper[4724]: I0226 13:50:53.983869 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:50:53 crc kubenswrapper[4724]: E0226 13:50:53.984669 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:51:09 crc kubenswrapper[4724]: I0226 13:51:09.976059 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:51:09 crc kubenswrapper[4724]: E0226 13:51:09.976957 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:51:23 crc kubenswrapper[4724]: I0226 13:51:23.985945 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:51:23 crc kubenswrapper[4724]: E0226 13:51:23.986805 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:51:34 crc kubenswrapper[4724]: I0226 13:51:34.976215 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:51:34 crc kubenswrapper[4724]: E0226 13:51:34.977068 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:51:45 crc kubenswrapper[4724]: I0226 13:51:45.975356 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:51:45 crc 
kubenswrapper[4724]: E0226 13:51:45.976200 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:51:59 crc kubenswrapper[4724]: I0226 13:51:59.976902 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:51:59 crc kubenswrapper[4724]: E0226 13:51:59.977796 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.180503 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535232-lsdkz"] Feb 26 13:52:00 crc kubenswrapper[4724]: E0226 13:52:00.181015 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2003f32e-3582-4e5d-a0ae-bbf3aa146536" containerName="oc" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.181033 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2003f32e-3582-4e5d-a0ae-bbf3aa146536" containerName="oc" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.181262 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2003f32e-3582-4e5d-a0ae-bbf3aa146536" containerName="oc" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.189293 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.195393 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.195569 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.195974 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.213474 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535232-lsdkz"] Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.267697 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55fk\" (UniqueName: \"kubernetes.io/projected/89714aff-88c5-4509-9792-9410d85e784b-kube-api-access-k55fk\") pod \"auto-csr-approver-29535232-lsdkz\" (UID: \"89714aff-88c5-4509-9792-9410d85e784b\") " pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.370393 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55fk\" (UniqueName: \"kubernetes.io/projected/89714aff-88c5-4509-9792-9410d85e784b-kube-api-access-k55fk\") pod \"auto-csr-approver-29535232-lsdkz\" (UID: \"89714aff-88c5-4509-9792-9410d85e784b\") " pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.401225 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k55fk\" (UniqueName: \"kubernetes.io/projected/89714aff-88c5-4509-9792-9410d85e784b-kube-api-access-k55fk\") pod \"auto-csr-approver-29535232-lsdkz\" (UID: \"89714aff-88c5-4509-9792-9410d85e784b\") " pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:00 crc kubenswrapper[4724]: I0226 13:52:00.535985 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:01 crc kubenswrapper[4724]: I0226 13:52:01.050546 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535232-lsdkz"] Feb 26 13:52:02 crc kubenswrapper[4724]: I0226 13:52:02.076652 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" event={"ID":"89714aff-88c5-4509-9792-9410d85e784b","Type":"ContainerStarted","Data":"e7f1bf4d401b92c07158f5b69bc8e7eec40c117b3fc9ad0822dc67c08db2666d"} Feb 26 13:52:03 crc kubenswrapper[4724]: I0226 13:52:03.111038 4724 generic.go:334] "Generic (PLEG): container finished" podID="89714aff-88c5-4509-9792-9410d85e784b" containerID="4bca270691629de1da0a1026715ede626226a4f93e7ca085e56beb16e3c0ba1d" exitCode=0 Feb 26 13:52:03 crc kubenswrapper[4724]: I0226 13:52:03.111374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" event={"ID":"89714aff-88c5-4509-9792-9410d85e784b","Type":"ContainerDied","Data":"4bca270691629de1da0a1026715ede626226a4f93e7ca085e56beb16e3c0ba1d"} Feb 26 13:52:04 crc kubenswrapper[4724]: I0226 13:52:04.519195 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:04 crc kubenswrapper[4724]: I0226 13:52:04.561935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k55fk\" (UniqueName: \"kubernetes.io/projected/89714aff-88c5-4509-9792-9410d85e784b-kube-api-access-k55fk\") pod \"89714aff-88c5-4509-9792-9410d85e784b\" (UID: \"89714aff-88c5-4509-9792-9410d85e784b\") " Feb 26 13:52:04 crc kubenswrapper[4724]: I0226 13:52:04.573464 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89714aff-88c5-4509-9792-9410d85e784b-kube-api-access-k55fk" (OuterVolumeSpecName: "kube-api-access-k55fk") pod "89714aff-88c5-4509-9792-9410d85e784b" (UID: "89714aff-88c5-4509-9792-9410d85e784b"). InnerVolumeSpecName "kube-api-access-k55fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:52:04 crc kubenswrapper[4724]: I0226 13:52:04.666066 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k55fk\" (UniqueName: \"kubernetes.io/projected/89714aff-88c5-4509-9792-9410d85e784b-kube-api-access-k55fk\") on node \"crc\" DevicePath \"\"" Feb 26 13:52:05 crc kubenswrapper[4724]: I0226 13:52:05.138895 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" event={"ID":"89714aff-88c5-4509-9792-9410d85e784b","Type":"ContainerDied","Data":"e7f1bf4d401b92c07158f5b69bc8e7eec40c117b3fc9ad0822dc67c08db2666d"} Feb 26 13:52:05 crc kubenswrapper[4724]: I0226 13:52:05.140263 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7f1bf4d401b92c07158f5b69bc8e7eec40c117b3fc9ad0822dc67c08db2666d" Feb 26 13:52:05 crc kubenswrapper[4724]: I0226 13:52:05.138972 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535232-lsdkz" Feb 26 13:52:05 crc kubenswrapper[4724]: I0226 13:52:05.609818 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535226-kr8bt"] Feb 26 13:52:05 crc kubenswrapper[4724]: I0226 13:52:05.619753 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535226-kr8bt"] Feb 26 13:52:05 crc kubenswrapper[4724]: I0226 13:52:05.989399 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8139296-d918-4ab2-8ab6-e95b63b34c65" path="/var/lib/kubelet/pods/b8139296-d918-4ab2-8ab6-e95b63b34c65/volumes" Feb 26 13:52:13 crc kubenswrapper[4724]: I0226 13:52:13.983315 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:52:13 crc kubenswrapper[4724]: E0226 13:52:13.986651 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 13:52:24 crc kubenswrapper[4724]: I0226 13:52:24.975073 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:52:25 crc kubenswrapper[4724]: I0226 13:52:25.335490 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"2b17d85d6c6df34095807be107f1568f69425ad599d566862bbb00bf5e94b604"} Feb 26 13:52:29 crc kubenswrapper[4724]: I0226 13:52:29.811996 4724 scope.go:117] "RemoveContainer" containerID="f62b8e9923c46333e7c1f34312d8fec5e5cf86c9b6f38cb18fee8f0586201093" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.099130 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xpspf"] Feb 26 13:52:55 crc kubenswrapper[4724]: E0226 13:52:55.100149 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89714aff-88c5-4509-9792-9410d85e784b" containerName="oc" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.100166 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="89714aff-88c5-4509-9792-9410d85e784b" containerName="oc" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.100383 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="89714aff-88c5-4509-9792-9410d85e784b" containerName="oc" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.101787 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.117452 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpspf"] Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.181937 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e6bf93-9eb4-4b41-9428-39cf8e781456-utilities\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.182395 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86td\" (UniqueName: \"kubernetes.io/projected/95e6bf93-9eb4-4b41-9428-39cf8e781456-kube-api-access-m86td\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.182440 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e6bf93-9eb4-4b41-9428-39cf8e781456-catalog-content\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.285005 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e6bf93-9eb4-4b41-9428-39cf8e781456-utilities\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.285129 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86td\" (UniqueName: \"kubernetes.io/projected/95e6bf93-9eb4-4b41-9428-39cf8e781456-kube-api-access-m86td\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.285163 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e6bf93-9eb4-4b41-9428-39cf8e781456-catalog-content\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.285502 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e6bf93-9eb4-4b41-9428-39cf8e781456-utilities\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.285604 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e6bf93-9eb4-4b41-9428-39cf8e781456-catalog-content\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.308210 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m86td\" (UniqueName: \"kubernetes.io/projected/95e6bf93-9eb4-4b41-9428-39cf8e781456-kube-api-access-m86td\") pod \"community-operators-xpspf\" (UID: \"95e6bf93-9eb4-4b41-9428-39cf8e781456\") " pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.446604 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:52:55 crc kubenswrapper[4724]: I0226 13:52:55.968304 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpspf"] Feb 26 13:52:56 crc kubenswrapper[4724]: I0226 13:52:56.623994 4724 generic.go:334] "Generic (PLEG): container finished" podID="95e6bf93-9eb4-4b41-9428-39cf8e781456" containerID="93397ddc418447fb01531801760bf93f9fed9d9c84edc09950fe99b942625e29" exitCode=0 Feb 26 13:52:56 crc kubenswrapper[4724]: I0226 13:52:56.624339 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpspf" event={"ID":"95e6bf93-9eb4-4b41-9428-39cf8e781456","Type":"ContainerDied","Data":"93397ddc418447fb01531801760bf93f9fed9d9c84edc09950fe99b942625e29"} Feb 26 13:52:56 crc kubenswrapper[4724]: I0226 13:52:56.624380 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpspf" event={"ID":"95e6bf93-9eb4-4b41-9428-39cf8e781456","Type":"ContainerStarted","Data":"4b950eaf4a7b1155b3144007e8b1a65cbb7ad739a7ef4088ff806dbaa218ac50"} Feb 26 13:53:05 crc kubenswrapper[4724]: I0226 13:53:05.724713 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpspf" event={"ID":"95e6bf93-9eb4-4b41-9428-39cf8e781456","Type":"ContainerStarted","Data":"b0b2a93a7ca0a72daa5bac430e92a34b8eb8af9a1c2b4a0c431956600c8a2ddb"} Feb 26 13:53:08 crc kubenswrapper[4724]: I0226 13:53:08.758775 4724 generic.go:334] "Generic (PLEG): container finished" podID="95e6bf93-9eb4-4b41-9428-39cf8e781456" containerID="b0b2a93a7ca0a72daa5bac430e92a34b8eb8af9a1c2b4a0c431956600c8a2ddb" exitCode=0 Feb 26 13:53:08 crc kubenswrapper[4724]: I0226 13:53:08.758863 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpspf" event={"ID":"95e6bf93-9eb4-4b41-9428-39cf8e781456","Type":"ContainerDied","Data":"b0b2a93a7ca0a72daa5bac430e92a34b8eb8af9a1c2b4a0c431956600c8a2ddb"} Feb 26 13:53:09 crc kubenswrapper[4724]: I0226 13:53:09.778658 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpspf" event={"ID":"95e6bf93-9eb4-4b41-9428-39cf8e781456","Type":"ContainerStarted","Data":"e7b0fb5ba572dfb486386998c65d7b1fbc84414a9f98b7e101047f41d7a36fec"} Feb 26 13:53:09 crc kubenswrapper[4724]: I0226 13:53:09.804776 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xpspf" podStartSLOduration=2.21258261 podStartE2EDuration="14.804741019s" podCreationTimestamp="2026-02-26 13:52:55 +0000 UTC" firstStartedPulling="2026-02-26 13:52:56.626340146 +0000 UTC m=+10043.282079261" lastFinishedPulling="2026-02-26 13:53:09.218498555 +0000 UTC m=+10055.874237670" observedRunningTime="2026-02-26 13:53:09.7996306 +0000 UTC m=+10056.455369745" watchObservedRunningTime="2026-02-26 13:53:09.804741019 +0000 UTC m=+10056.460480144" Feb 26 13:53:15 crc kubenswrapper[4724]: I0226 13:53:15.447164 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:53:15 crc kubenswrapper[4724]: I0226 13:53:15.447708 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:53:15 crc kubenswrapper[4724]: I0226 13:53:15.505001 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:53:15 crc kubenswrapper[4724]: I0226 13:53:15.900088 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xpspf" Feb 26 13:53:15 crc kubenswrapper[4724]: I0226 13:53:15.992979 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpspf"] Feb 26 13:53:16 crc kubenswrapper[4724]: I0226 13:53:16.078325 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hw4f"] Feb 26 13:53:16 crc kubenswrapper[4724]: I0226 13:53:16.079442 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hw4f" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="registry-server" containerID="cri-o://673a1d66f5308f991c9b1ce81660bb0d7b6bfceea770cfcd67bb88075f7243c6" gracePeriod=2 Feb 26 13:53:16 crc kubenswrapper[4724]: I0226 13:53:16.853484 4724 generic.go:334] "Generic (PLEG): container finished" podID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerID="673a1d66f5308f991c9b1ce81660bb0d7b6bfceea770cfcd67bb88075f7243c6" exitCode=0 Feb 26 13:53:16 crc kubenswrapper[4724]: I0226 13:53:16.853562 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hw4f" event={"ID":"1390f0e7-ad55-44f1-9ef0-0a732c57cc28","Type":"ContainerDied","Data":"673a1d66f5308f991c9b1ce81660bb0d7b6bfceea770cfcd67bb88075f7243c6"} Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.400904 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.504054 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-catalog-content\") pod \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.504442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qpbd\" (UniqueName: \"kubernetes.io/projected/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-kube-api-access-9qpbd\") pod \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.504686 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-utilities\") pod \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\" (UID: \"1390f0e7-ad55-44f1-9ef0-0a732c57cc28\") " Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.505576 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-utilities" (OuterVolumeSpecName: "utilities") pod "1390f0e7-ad55-44f1-9ef0-0a732c57cc28" (UID: "1390f0e7-ad55-44f1-9ef0-0a732c57cc28"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.525640 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-kube-api-access-9qpbd" (OuterVolumeSpecName: "kube-api-access-9qpbd") pod "1390f0e7-ad55-44f1-9ef0-0a732c57cc28" (UID: "1390f0e7-ad55-44f1-9ef0-0a732c57cc28"). InnerVolumeSpecName "kube-api-access-9qpbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.582974 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1390f0e7-ad55-44f1-9ef0-0a732c57cc28" (UID: "1390f0e7-ad55-44f1-9ef0-0a732c57cc28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.607889 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.607934 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.607948 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qpbd\" (UniqueName: \"kubernetes.io/projected/1390f0e7-ad55-44f1-9ef0-0a732c57cc28-kube-api-access-9qpbd\") on node \"crc\" DevicePath \"\"" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.866834 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hw4f" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.869088 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hw4f" event={"ID":"1390f0e7-ad55-44f1-9ef0-0a732c57cc28","Type":"ContainerDied","Data":"44a592933bb3afd64f8281d3d39f4fd425be2984484b1dd2a056cc06b82af48f"} Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.869158 4724 scope.go:117] "RemoveContainer" containerID="673a1d66f5308f991c9b1ce81660bb0d7b6bfceea770cfcd67bb88075f7243c6" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.926948 4724 scope.go:117] "RemoveContainer" containerID="e5a22da4c1c5497c40d0239a8b9010a7b64d505ce15433765c72c7b970f75000" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.929977 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hw4f"] Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.947787 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hw4f"] Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.984896 4724 scope.go:117] "RemoveContainer" containerID="b1764800ed13fd553e7e0bc366982ad8d2202defde84d07e318ca82e19d781e1" Feb 26 13:53:17 crc kubenswrapper[4724]: I0226 13:53:17.991034 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" path="/var/lib/kubelet/pods/1390f0e7-ad55-44f1-9ef0-0a732c57cc28/volumes" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.164467 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535234-5qtgg"] Feb 26 13:54:00 crc kubenswrapper[4724]: E0226 13:54:00.165440 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="registry-server" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.165458 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="registry-server" Feb 26 13:54:00 crc kubenswrapper[4724]: E0226 13:54:00.165489 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="extract-content" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.165498 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="extract-content" Feb 26 13:54:00 crc kubenswrapper[4724]: E0226 13:54:00.165544 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="extract-utilities" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.165553 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="extract-utilities" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.165773 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1390f0e7-ad55-44f1-9ef0-0a732c57cc28" containerName="registry-server" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.166645 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.170870 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.171128 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.171375 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.193929 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535234-5qtgg"] Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.347009 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqb64\" (UniqueName: \"kubernetes.io/projected/621a966f-e03c-41bc-82dc-121342e2ea65-kube-api-access-bqb64\") pod \"auto-csr-approver-29535234-5qtgg\" (UID: \"621a966f-e03c-41bc-82dc-121342e2ea65\") " pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.449196 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqb64\" (UniqueName: \"kubernetes.io/projected/621a966f-e03c-41bc-82dc-121342e2ea65-kube-api-access-bqb64\") pod \"auto-csr-approver-29535234-5qtgg\" (UID: \"621a966f-e03c-41bc-82dc-121342e2ea65\") " pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.479095 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqb64\" (UniqueName: \"kubernetes.io/projected/621a966f-e03c-41bc-82dc-121342e2ea65-kube-api-access-bqb64\") pod \"auto-csr-approver-29535234-5qtgg\" (UID: \"621a966f-e03c-41bc-82dc-121342e2ea65\") " pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:00 crc kubenswrapper[4724]: I0226 13:54:00.527105 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:01 crc kubenswrapper[4724]: I0226 13:54:01.064641 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535234-5qtgg"] Feb 26 13:54:01 crc kubenswrapper[4724]: I0226 13:54:01.319918 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" event={"ID":"621a966f-e03c-41bc-82dc-121342e2ea65","Type":"ContainerStarted","Data":"8eec57cca06b4eb6f964dded8d5819355ae8da2e298e797ffe2de1e7c3fb0742"} Feb 26 13:54:04 crc kubenswrapper[4724]: I0226 13:54:04.348713 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" event={"ID":"621a966f-e03c-41bc-82dc-121342e2ea65","Type":"ContainerStarted","Data":"7c0cf80f1f46bc30166516b17a3dc65642704e8f72f8825d19c81fb37ca08901"} Feb 26 13:54:04 crc kubenswrapper[4724]: I0226 13:54:04.375486 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" podStartSLOduration=2.400501884 podStartE2EDuration="4.375464934s" podCreationTimestamp="2026-02-26 13:54:00 +0000 UTC" firstStartedPulling="2026-02-26 13:54:01.080388854 +0000 UTC m=+10107.736127979" lastFinishedPulling="2026-02-26 13:54:03.055351914 +0000 UTC m=+10109.711091029" observedRunningTime="2026-02-26 13:54:04.370577991 +0000 UTC m=+10111.026317126" watchObservedRunningTime="2026-02-26 13:54:04.375464934 +0000 UTC m=+10111.031204049" Feb 26 13:54:05 crc kubenswrapper[4724]: I0226 13:54:05.361703 4724 generic.go:334] "Generic (PLEG): container finished" podID="621a966f-e03c-41bc-82dc-121342e2ea65" containerID="7c0cf80f1f46bc30166516b17a3dc65642704e8f72f8825d19c81fb37ca08901" exitCode=0 Feb 26 13:54:05 crc kubenswrapper[4724]: I0226 13:54:05.361899 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" event={"ID":"621a966f-e03c-41bc-82dc-121342e2ea65","Type":"ContainerDied","Data":"7c0cf80f1f46bc30166516b17a3dc65642704e8f72f8825d19c81fb37ca08901"} Feb 26 13:54:07 crc kubenswrapper[4724]: I0226 13:54:07.412885 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" event={"ID":"621a966f-e03c-41bc-82dc-121342e2ea65","Type":"ContainerDied","Data":"8eec57cca06b4eb6f964dded8d5819355ae8da2e298e797ffe2de1e7c3fb0742"} Feb 26 13:54:07 crc kubenswrapper[4724]: I0226 13:54:07.413392 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eec57cca06b4eb6f964dded8d5819355ae8da2e298e797ffe2de1e7c3fb0742" Feb 26 13:54:07 crc kubenswrapper[4724]: I0226 13:54:07.421167 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:07 crc kubenswrapper[4724]: I0226 13:54:07.503326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqb64\" (UniqueName: \"kubernetes.io/projected/621a966f-e03c-41bc-82dc-121342e2ea65-kube-api-access-bqb64\") pod \"621a966f-e03c-41bc-82dc-121342e2ea65\" (UID: \"621a966f-e03c-41bc-82dc-121342e2ea65\") " Feb 26 13:54:07 crc kubenswrapper[4724]: I0226 13:54:07.511339 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621a966f-e03c-41bc-82dc-121342e2ea65-kube-api-access-bqb64" (OuterVolumeSpecName: "kube-api-access-bqb64") pod "621a966f-e03c-41bc-82dc-121342e2ea65" (UID: "621a966f-e03c-41bc-82dc-121342e2ea65"). InnerVolumeSpecName "kube-api-access-bqb64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:54:07 crc kubenswrapper[4724]: I0226 13:54:07.606257 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqb64\" (UniqueName: \"kubernetes.io/projected/621a966f-e03c-41bc-82dc-121342e2ea65-kube-api-access-bqb64\") on node \"crc\" DevicePath \"\"" Feb 26 13:54:08 crc kubenswrapper[4724]: I0226 13:54:08.419987 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535234-5qtgg" Feb 26 13:54:08 crc kubenswrapper[4724]: I0226 13:54:08.506225 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535228-6hnl7"] Feb 26 13:54:08 crc kubenswrapper[4724]: I0226 13:54:08.514893 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535228-6hnl7"] Feb 26 13:54:09 crc kubenswrapper[4724]: I0226 13:54:09.988429 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1871320c-3557-4e2d-aa76-f50734b03731" path="/var/lib/kubelet/pods/1871320c-3557-4e2d-aa76-f50734b03731/volumes" Feb 26 13:54:29 crc kubenswrapper[4724]: I0226 13:54:29.936897 4724 scope.go:117] "RemoveContainer" containerID="8e2d0a6674e1c3abecff4783af79c6e89b69a67e5891d65b8a909e36d232b2fb" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.153870 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwprp"] Feb 26 13:54:46 crc kubenswrapper[4724]: E0226 13:54:46.154834 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621a966f-e03c-41bc-82dc-121342e2ea65" containerName="oc" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.154849 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="621a966f-e03c-41bc-82dc-121342e2ea65" containerName="oc" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.155140 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="621a966f-e03c-41bc-82dc-121342e2ea65" containerName="oc" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.156773 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.165739 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pm4k\" (UniqueName: \"kubernetes.io/projected/46b3b9ea-0937-493f-a33b-106605e608ee-kube-api-access-2pm4k\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.166125 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-utilities\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.166447 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-catalog-content\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.175052 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwprp"] Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.268974 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-catalog-content\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.269107 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pm4k\" (UniqueName: \"kubernetes.io/projected/46b3b9ea-0937-493f-a33b-106605e608ee-kube-api-access-2pm4k\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.269156 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-utilities\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.269989 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-catalog-content\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.270530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-utilities\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.300589 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2pm4k\" (UniqueName: \"kubernetes.io/projected/46b3b9ea-0937-493f-a33b-106605e608ee-kube-api-access-2pm4k\") pod \"redhat-operators-pwprp\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.480727 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.907275 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.908449 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:54:46 crc kubenswrapper[4724]: I0226 13:54:46.967164 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwprp"] Feb 26 13:54:47 crc kubenswrapper[4724]: I0226 13:54:47.790258 4724 generic.go:334] "Generic (PLEG): container finished" podID="46b3b9ea-0937-493f-a33b-106605e608ee" containerID="1ed0ea0478fa2538f660c7535e88122b73dd1729a435ce3115667596d7cdeb6f" exitCode=0 Feb 26 13:54:47 crc kubenswrapper[4724]: I0226 13:54:47.790554 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerDied","Data":"1ed0ea0478fa2538f660c7535e88122b73dd1729a435ce3115667596d7cdeb6f"} Feb 26 13:54:47 crc kubenswrapper[4724]: I0226 13:54:47.791477 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerStarted","Data":"d39ccd48b66e50a6e08f786faa43473101085972525aa5a1b4a716221803e109"} Feb 26 13:54:47 crc kubenswrapper[4724]: I0226 13:54:47.794585 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 13:54:49 crc kubenswrapper[4724]: I0226 13:54:49.815822 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerStarted","Data":"ba1bd846bcdf76723d68f9ee99d881cc68f10244eb0ae70b5ad407da938691fe"} Feb 26 13:54:54 crc kubenswrapper[4724]: E0226 13:54:54.795457 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46b3b9ea_0937_493f_a33b_106605e608ee.slice/crio-ba1bd846bcdf76723d68f9ee99d881cc68f10244eb0ae70b5ad407da938691fe.scope\": RecentStats: unable to find data in memory cache]" Feb 26 13:54:54 crc kubenswrapper[4724]: I0226 13:54:54.873940 4724 generic.go:334] "Generic (PLEG): container finished" podID="46b3b9ea-0937-493f-a33b-106605e608ee" containerID="ba1bd846bcdf76723d68f9ee99d881cc68f10244eb0ae70b5ad407da938691fe" exitCode=0 Feb 26 13:54:54 crc kubenswrapper[4724]: I0226 13:54:54.874020 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerDied","Data":"ba1bd846bcdf76723d68f9ee99d881cc68f10244eb0ae70b5ad407da938691fe"} Feb 26 13:54:55 crc kubenswrapper[4724]: I0226 13:54:55.883448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerStarted","Data":"60556b9b1bd73cab819b7926d12df7eef9710284b5a03493666acfd0b2b03579"} Feb 26 13:54:55 crc kubenswrapper[4724]: I0226 13:54:55.913578 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwprp" podStartSLOduration=2.398697802 podStartE2EDuration="9.91350339s" podCreationTimestamp="2026-02-26 13:54:46 +0000 UTC" firstStartedPulling="2026-02-26 13:54:47.794347969 +0000 UTC m=+10154.450087084" lastFinishedPulling="2026-02-26 13:54:55.309153557 +0000 UTC m=+10161.964892672" observedRunningTime="2026-02-26 13:54:55.903826925 +0000 UTC m=+10162.559566050" watchObservedRunningTime="2026-02-26 13:54:55.91350339 +0000 UTC m=+10162.569242505" Feb 26 13:54:56 crc kubenswrapper[4724]: I0226 13:54:56.481616 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:56 crc kubenswrapper[4724]: I0226 13:54:56.481688 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:54:57 crc kubenswrapper[4724]: I0226 13:54:57.532139 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwprp" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" probeResult="failure" output=< Feb 26 13:54:57 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:54:57 crc kubenswrapper[4724]: > Feb 26 13:55:07 crc kubenswrapper[4724]: I0226 13:55:07.532829 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwprp" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" probeResult="failure" output=< Feb 26 13:55:07 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:55:07 crc kubenswrapper[4724]: > Feb 26 13:55:16 crc kubenswrapper[4724]: I0226 13:55:16.905866 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:55:16 crc kubenswrapper[4724]: I0226 13:55:16.907969 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:55:17 crc kubenswrapper[4724]: I0226 13:55:17.527404 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwprp" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" probeResult="failure" output=< Feb 26 13:55:17 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:55:17 crc kubenswrapper[4724]: > Feb 26 13:55:27 crc 
kubenswrapper[4724]: I0226 13:55:27.533493 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwprp" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" probeResult="failure" output=< Feb 26 13:55:27 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 13:55:27 crc kubenswrapper[4724]: > Feb 26 13:55:36 crc kubenswrapper[4724]: I0226 13:55:36.557233 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:55:36 crc kubenswrapper[4724]: I0226 13:55:36.620549 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:55:37 crc kubenswrapper[4724]: I0226 13:55:37.618215 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwprp"] Feb 26 13:55:38 crc kubenswrapper[4724]: I0226 13:55:38.303520 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwprp" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" containerID="cri-o://60556b9b1bd73cab819b7926d12df7eef9710284b5a03493666acfd0b2b03579" gracePeriod=2 Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.318865 4724 generic.go:334] "Generic (PLEG): container finished" podID="46b3b9ea-0937-493f-a33b-106605e608ee" containerID="60556b9b1bd73cab819b7926d12df7eef9710284b5a03493666acfd0b2b03579" exitCode=0 Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.318930 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerDied","Data":"60556b9b1bd73cab819b7926d12df7eef9710284b5a03493666acfd0b2b03579"} Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.538108 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.703249 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-utilities\") pod \"46b3b9ea-0937-493f-a33b-106605e608ee\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.703646 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pm4k\" (UniqueName: \"kubernetes.io/projected/46b3b9ea-0937-493f-a33b-106605e608ee-kube-api-access-2pm4k\") pod \"46b3b9ea-0937-493f-a33b-106605e608ee\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.703672 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-catalog-content\") pod \"46b3b9ea-0937-493f-a33b-106605e608ee\" (UID: \"46b3b9ea-0937-493f-a33b-106605e608ee\") " Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.706600 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-utilities" (OuterVolumeSpecName: "utilities") pod "46b3b9ea-0937-493f-a33b-106605e608ee" (UID: "46b3b9ea-0937-493f-a33b-106605e608ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.727813 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b3b9ea-0937-493f-a33b-106605e608ee-kube-api-access-2pm4k" (OuterVolumeSpecName: "kube-api-access-2pm4k") pod "46b3b9ea-0937-493f-a33b-106605e608ee" (UID: "46b3b9ea-0937-493f-a33b-106605e608ee"). InnerVolumeSpecName "kube-api-access-2pm4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.811101 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pm4k\" (UniqueName: \"kubernetes.io/projected/46b3b9ea-0937-493f-a33b-106605e608ee-kube-api-access-2pm4k\") on node \"crc\" DevicePath \"\"" Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.811138 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.898835 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46b3b9ea-0937-493f-a33b-106605e608ee" (UID: "46b3b9ea-0937-493f-a33b-106605e608ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 13:55:39 crc kubenswrapper[4724]: I0226 13:55:39.913138 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b3b9ea-0937-493f-a33b-106605e608ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.343516 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwprp" event={"ID":"46b3b9ea-0937-493f-a33b-106605e608ee","Type":"ContainerDied","Data":"d39ccd48b66e50a6e08f786faa43473101085972525aa5a1b4a716221803e109"} Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.343615 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwprp" Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.344203 4724 scope.go:117] "RemoveContainer" containerID="60556b9b1bd73cab819b7926d12df7eef9710284b5a03493666acfd0b2b03579" Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.373472 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwprp"] Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.382300 4724 scope.go:117] "RemoveContainer" containerID="ba1bd846bcdf76723d68f9ee99d881cc68f10244eb0ae70b5ad407da938691fe" Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.383398 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwprp"] Feb 26 13:55:40 crc kubenswrapper[4724]: I0226 13:55:40.412859 4724 scope.go:117] "RemoveContainer" containerID="1ed0ea0478fa2538f660c7535e88122b73dd1729a435ce3115667596d7cdeb6f" Feb 26 13:55:41 crc kubenswrapper[4724]: I0226 13:55:41.988214 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" path="/var/lib/kubelet/pods/46b3b9ea-0937-493f-a33b-106605e608ee/volumes" Feb 26 13:55:46 crc kubenswrapper[4724]: I0226 13:55:46.906601 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 13:55:46 crc kubenswrapper[4724]: I0226 13:55:46.907093 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 13:55:46 crc kubenswrapper[4724]: I0226 13:55:46.907145 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 13:55:46 crc kubenswrapper[4724]: I0226 13:55:46.908787 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b17d85d6c6df34095807be107f1568f69425ad599d566862bbb00bf5e94b604"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 13:55:46 crc kubenswrapper[4724]: I0226 13:55:46.908870 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://2b17d85d6c6df34095807be107f1568f69425ad599d566862bbb00bf5e94b604" gracePeriod=600 Feb 26 13:55:47 crc kubenswrapper[4724]: I0226 13:55:47.406410 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="2b17d85d6c6df34095807be107f1568f69425ad599d566862bbb00bf5e94b604" exitCode=0 Feb 26 13:55:47 crc kubenswrapper[4724]: I0226 13:55:47.406466 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" 
event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"2b17d85d6c6df34095807be107f1568f69425ad599d566862bbb00bf5e94b604"} Feb 26 13:55:47 crc kubenswrapper[4724]: I0226 13:55:47.406765 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"} Feb 26 13:55:47 crc kubenswrapper[4724]: I0226 13:55:47.406805 4724 scope.go:117] "RemoveContainer" containerID="e937676624bd60f0c07b6d7a0bfcb3ea5df38463ff920afcf367ae2be916be6b" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.186435 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535236-vwhdd"] Feb 26 13:56:00 crc kubenswrapper[4724]: E0226 13:56:00.190470 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="extract-content" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.190694 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="extract-content" Feb 26 13:56:00 crc kubenswrapper[4724]: E0226 13:56:00.190815 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="extract-utilities" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.190828 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="extract-utilities" Feb 26 13:56:00 crc kubenswrapper[4724]: E0226 13:56:00.190838 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.190845 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.191600 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b3b9ea-0937-493f-a33b-106605e608ee" containerName="registry-server" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.193020 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.211861 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.211959 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.212519 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.223335 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535236-vwhdd"] Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.281568 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4rm\" (UniqueName: \"kubernetes.io/projected/f8f8e897-2aae-472c-9741-67537defe9d1-kube-api-access-br4rm\") pod \"auto-csr-approver-29535236-vwhdd\" (UID: \"f8f8e897-2aae-472c-9741-67537defe9d1\") " pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.384736 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4rm\" (UniqueName: \"kubernetes.io/projected/f8f8e897-2aae-472c-9741-67537defe9d1-kube-api-access-br4rm\") pod \"auto-csr-approver-29535236-vwhdd\" (UID: \"f8f8e897-2aae-472c-9741-67537defe9d1\") " pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.407825 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br4rm\" (UniqueName: \"kubernetes.io/projected/f8f8e897-2aae-472c-9741-67537defe9d1-kube-api-access-br4rm\") pod \"auto-csr-approver-29535236-vwhdd\" (UID: \"f8f8e897-2aae-472c-9741-67537defe9d1\") " pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:00 crc kubenswrapper[4724]: I0226 13:56:00.534474 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:01 crc kubenswrapper[4724]: I0226 13:56:01.103494 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535236-vwhdd"] Feb 26 13:56:01 crc kubenswrapper[4724]: I0226 13:56:01.543151 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" event={"ID":"f8f8e897-2aae-472c-9741-67537defe9d1","Type":"ContainerStarted","Data":"969e26db160bfd26445d3e1f0f2e99cb1045d1599bcd2cb7ca4306a459237812"} Feb 26 13:56:03 crc kubenswrapper[4724]: I0226 13:56:03.584797 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" event={"ID":"f8f8e897-2aae-472c-9741-67537defe9d1","Type":"ContainerStarted","Data":"63982e76cb7175666804bfa659266ba78892012c972ed2c701805c614f53d51c"} Feb 26 13:56:03 crc kubenswrapper[4724]: I0226 13:56:03.617281 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" podStartSLOduration=2.429790413 podStartE2EDuration="3.617251911s" podCreationTimestamp="2026-02-26 13:56:00 +0000 UTC" firstStartedPulling="2026-02-26 13:56:01.114420092 +0000 UTC m=+10227.770159207" lastFinishedPulling="2026-02-26 13:56:02.30188159 +0000 UTC m=+10228.957620705" observedRunningTime="2026-02-26 13:56:03.609406553 +0000 UTC m=+10230.265145678" watchObservedRunningTime="2026-02-26 13:56:03.617251911 +0000 UTC m=+10230.272991026" Feb 26 13:56:05 crc kubenswrapper[4724]: I0226 13:56:05.612757 4724 generic.go:334] "Generic (PLEG): container finished" podID="f8f8e897-2aae-472c-9741-67537defe9d1" containerID="63982e76cb7175666804bfa659266ba78892012c972ed2c701805c614f53d51c" exitCode=0 Feb 26 13:56:05 crc kubenswrapper[4724]: I0226 13:56:05.612834 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" event={"ID":"f8f8e897-2aae-472c-9741-67537defe9d1","Type":"ContainerDied","Data":"63982e76cb7175666804bfa659266ba78892012c972ed2c701805c614f53d51c"} Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.244381 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.424901 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br4rm\" (UniqueName: \"kubernetes.io/projected/f8f8e897-2aae-472c-9741-67537defe9d1-kube-api-access-br4rm\") pod \"f8f8e897-2aae-472c-9741-67537defe9d1\" (UID: \"f8f8e897-2aae-472c-9741-67537defe9d1\") " Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.448836 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f8e897-2aae-472c-9741-67537defe9d1-kube-api-access-br4rm" (OuterVolumeSpecName: "kube-api-access-br4rm") pod "f8f8e897-2aae-472c-9741-67537defe9d1" (UID: "f8f8e897-2aae-472c-9741-67537defe9d1"). InnerVolumeSpecName "kube-api-access-br4rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.528977 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br4rm\" (UniqueName: \"kubernetes.io/projected/f8f8e897-2aae-472c-9741-67537defe9d1-kube-api-access-br4rm\") on node \"crc\" DevicePath \"\"" Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.636935 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" event={"ID":"f8f8e897-2aae-472c-9741-67537defe9d1","Type":"ContainerDied","Data":"969e26db160bfd26445d3e1f0f2e99cb1045d1599bcd2cb7ca4306a459237812"} Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.637023 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="969e26db160bfd26445d3e1f0f2e99cb1045d1599bcd2cb7ca4306a459237812" Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.637118 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535236-vwhdd" Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.828575 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535230-89n8p"] Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.840384 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535230-89n8p"] Feb 26 13:56:07 crc kubenswrapper[4724]: I0226 13:56:07.992112 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2003f32e-3582-4e5d-a0ae-bbf3aa146536" path="/var/lib/kubelet/pods/2003f32e-3582-4e5d-a0ae-bbf3aa146536/volumes" Feb 26 13:56:30 crc kubenswrapper[4724]: I0226 13:56:30.135787 4724 scope.go:117] "RemoveContainer" containerID="210be6f0568bc693371f3278ee0772229decf941b9fcfd3db328f2b331063781" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.163804 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535238-j5bxm"] Feb 26 13:58:00 crc kubenswrapper[4724]: E0226 13:58:00.166000 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f8e897-2aae-472c-9741-67537defe9d1" containerName="oc" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.166037 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f8e897-2aae-472c-9741-67537defe9d1" containerName="oc" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.166528 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f8e897-2aae-472c-9741-67537defe9d1" containerName="oc" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.169527 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535238-j5bxm" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.175283 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535238-j5bxm"] Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.176026 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2z87\" (UniqueName: \"kubernetes.io/projected/b0b3d573-e3f5-4eac-91fc-e5436296bd24-kube-api-access-g2z87\") pod \"auto-csr-approver-29535238-j5bxm\" (UID: \"b0b3d573-e3f5-4eac-91fc-e5436296bd24\") " pod="openshift-infra/auto-csr-approver-29535238-j5bxm" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.176693 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.178580 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.178759 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.277143 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2z87\" (UniqueName: \"kubernetes.io/projected/b0b3d573-e3f5-4eac-91fc-e5436296bd24-kube-api-access-g2z87\") pod \"auto-csr-approver-29535238-j5bxm\" (UID: \"b0b3d573-e3f5-4eac-91fc-e5436296bd24\") " pod="openshift-infra/auto-csr-approver-29535238-j5bxm" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.301290 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2z87\" (UniqueName: \"kubernetes.io/projected/b0b3d573-e3f5-4eac-91fc-e5436296bd24-kube-api-access-g2z87\") pod \"auto-csr-approver-29535238-j5bxm\" (UID: \"b0b3d573-e3f5-4eac-91fc-e5436296bd24\") " pod="openshift-infra/auto-csr-approver-29535238-j5bxm" Feb 26 13:58:00 crc kubenswrapper[4724]: I0226 13:58:00.517721 4724 util.go:30] "No sandbox for pod can be found. 
Feb 26 13:58:01 crc kubenswrapper[4724]: I0226 13:58:01.057265 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535238-j5bxm"]
Feb 26 13:58:01 crc kubenswrapper[4724]: I0226 13:58:01.508649 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535238-j5bxm" event={"ID":"b0b3d573-e3f5-4eac-91fc-e5436296bd24","Type":"ContainerStarted","Data":"7dfa556e470a890077c73ccdca371f2b4cb5a8e85a6bf2dd70a75cec48e07d11"}
Feb 26 13:58:02 crc kubenswrapper[4724]: I0226 13:58:02.520282 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535238-j5bxm" event={"ID":"b0b3d573-e3f5-4eac-91fc-e5436296bd24","Type":"ContainerStarted","Data":"bf18d5241f9a375ab46bf94f9f9f61f2ff2dde09b45e41a00b9b6127ae4f0cde"}
Feb 26 13:58:02 crc kubenswrapper[4724]: I0226 13:58:02.544801 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535238-j5bxm" podStartSLOduration=1.4266594590000001 podStartE2EDuration="2.544764096s" podCreationTimestamp="2026-02-26 13:58:00 +0000 UTC" firstStartedPulling="2026-02-26 13:58:01.05628866 +0000 UTC m=+10347.712027775" lastFinishedPulling="2026-02-26 13:58:02.174393297 +0000 UTC m=+10348.830132412" observedRunningTime="2026-02-26 13:58:02.539122194 +0000 UTC m=+10349.194861319" watchObservedRunningTime="2026-02-26 13:58:02.544764096 +0000 UTC m=+10349.200503211"
Feb 26 13:58:04 crc kubenswrapper[4724]: I0226 13:58:04.584054 4724 generic.go:334] "Generic (PLEG): container finished" podID="b0b3d573-e3f5-4eac-91fc-e5436296bd24" containerID="bf18d5241f9a375ab46bf94f9f9f61f2ff2dde09b45e41a00b9b6127ae4f0cde" exitCode=0
Feb 26 13:58:04 crc kubenswrapper[4724]: I0226 13:58:04.584494 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535238-j5bxm" event={"ID":"b0b3d573-e3f5-4eac-91fc-e5436296bd24","Type":"ContainerDied","Data":"bf18d5241f9a375ab46bf94f9f9f61f2ff2dde09b45e41a00b9b6127ae4f0cde"}
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.128218 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535238-j5bxm"
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.313025 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2z87\" (UniqueName: \"kubernetes.io/projected/b0b3d573-e3f5-4eac-91fc-e5436296bd24-kube-api-access-g2z87\") pod \"b0b3d573-e3f5-4eac-91fc-e5436296bd24\" (UID: \"b0b3d573-e3f5-4eac-91fc-e5436296bd24\") "
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.319140 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b3d573-e3f5-4eac-91fc-e5436296bd24-kube-api-access-g2z87" (OuterVolumeSpecName: "kube-api-access-g2z87") pod "b0b3d573-e3f5-4eac-91fc-e5436296bd24" (UID: "b0b3d573-e3f5-4eac-91fc-e5436296bd24"). InnerVolumeSpecName "kube-api-access-g2z87". PluginName "kubernetes.io/projected", VolumeGidValue ""
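The pod_startup_latency_tracker entry above is self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted, which matches how the tracker is commonly described. The check below just redoes that arithmetic on the logged timestamps:

from datetime import datetime, timezone

def ts(s):
    # Tracker timestamps look like "2026-02-26 13:58:02.544764096 +0000 UTC".
    # Python datetimes carry microseconds, so truncate the nanoseconds to 6 digits.
    date, clock = s.split(" ")[0:2]
    if "." in clock:
        whole, frac = clock.split(".")
        clock = whole + "." + frac[:6]
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(date + " " + clock, fmt).replace(tzinfo=timezone.utc)

created = ts("2026-02-26 13:58:00 +0000 UTC")
running = ts("2026-02-26 13:58:02.544764096 +0000 UTC")
pull_a  = ts("2026-02-26 13:58:01.05628866 +0000 UTC")
pull_b  = ts("2026-02-26 13:58:02.174393297 +0000 UTC")

e2e = (running - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(round(e2e, 6), round(slo, 6))  # ~2.544764 and ~1.426659, matching the logged values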
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.415850 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2z87\" (UniqueName: \"kubernetes.io/projected/b0b3d573-e3f5-4eac-91fc-e5436296bd24-kube-api-access-g2z87\") on node \"crc\" DevicePath \"\""
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.605032 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535238-j5bxm" event={"ID":"b0b3d573-e3f5-4eac-91fc-e5436296bd24","Type":"ContainerDied","Data":"7dfa556e470a890077c73ccdca371f2b4cb5a8e85a6bf2dd70a75cec48e07d11"}
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.605072 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535238-j5bxm"
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.605089 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dfa556e470a890077c73ccdca371f2b4cb5a8e85a6bf2dd70a75cec48e07d11"
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.670782 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535232-lsdkz"]
Feb 26 13:58:06 crc kubenswrapper[4724]: I0226 13:58:06.679287 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535232-lsdkz"]
Feb 26 13:58:07 crc kubenswrapper[4724]: I0226 13:58:07.987755 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89714aff-88c5-4509-9792-9410d85e784b" path="/var/lib/kubelet/pods/89714aff-88c5-4509-9792-9410d85e784b/volumes"
Feb 26 13:58:16 crc kubenswrapper[4724]: I0226 13:58:16.906270 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 13:58:16 crc kubenswrapper[4724]: I0226 13:58:16.907041 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 13:58:30 crc kubenswrapper[4724]: I0226 13:58:30.310273 4724 scope.go:117] "RemoveContainer" containerID="4bca270691629de1da0a1026715ede626226a4f93e7ca085e56beb16e3c0ba1d"
Feb 26 13:58:46 crc kubenswrapper[4724]: I0226 13:58:46.906205 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 13:58:46 crc kubenswrapper[4724]: I0226 13:58:46.906957 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.303443 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2nqn"]
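The machine-config-daemon liveness failures above are plain HTTP GETs from the kubelet to the container's /health endpoint being refused outright, i.e. nothing is listening on 127.0.0.1:8798. A quick way to reproduce what the prober sees from the node itself (a hand-rolled check, not the kubelet's actual prober code):

import urllib.request
import urllib.error

# Mirrors the probe target in the log lines above: GET http://127.0.0.1:8798/health
URL = "http://127.0.0.1:8798/health"

try:
    with urllib.request.urlopen(URL, timeout=1) as resp:
        # The kubelet treats a 2xx/3xx status as success for httpGet probes.
        print("probe success:", resp.status)
except urllib.error.URLError as exc:
    # With nothing listening this prints a "Connection refused" reason, matching
    # the logged "dial tcp 127.0.0.1:8798: connect: connection refused" output.
    print("probe failure:", exc.reason)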
Feb 26 13:58:56 crc kubenswrapper[4724]: E0226 13:58:56.304255 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b3d573-e3f5-4eac-91fc-e5436296bd24" containerName="oc"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.304270 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b3d573-e3f5-4eac-91fc-e5436296bd24" containerName="oc"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.304494 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b3d573-e3f5-4eac-91fc-e5436296bd24" containerName="oc"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.354545 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2nqn"]
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.358749 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.516299 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-utilities\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.516400 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwlg\" (UniqueName: \"kubernetes.io/projected/a8799371-343f-471f-9e91-12818e2988e9-kube-api-access-rmwlg\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.516795 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-catalog-content\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.618670 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-catalog-content\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.618767 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-utilities\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.618799 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwlg\" (UniqueName: \"kubernetes.io/projected/a8799371-343f-471f-9e91-12818e2988e9-kube-api-access-rmwlg\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.619529 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-utilities\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.619837 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-catalog-content\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.808322 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwlg\" (UniqueName: \"kubernetes.io/projected/a8799371-343f-471f-9e91-12818e2988e9-kube-api-access-rmwlg\") pod \"redhat-marketplace-b2nqn\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") " pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:56 crc kubenswrapper[4724]: I0226 13:58:56.994485 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:58:57 crc kubenswrapper[4724]: I0226 13:58:57.775235 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2nqn"]
Feb 26 13:58:58 crc kubenswrapper[4724]: I0226 13:58:58.105362 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8799371-343f-471f-9e91-12818e2988e9" containerID="3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153" exitCode=0
Feb 26 13:58:58 crc kubenswrapper[4724]: I0226 13:58:58.105450 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerDied","Data":"3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153"}
Feb 26 13:58:58 crc kubenswrapper[4724]: I0226 13:58:58.105644 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerStarted","Data":"f92c2dba1b2e67d4170702e5d98de5d3d7ef1b1ac99e9bfa0d680b01e4dfd1cc"}
Feb 26 13:58:59 crc kubenswrapper[4724]: I0226 13:58:59.116469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerStarted","Data":"0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717"}
Feb 26 13:59:01 crc kubenswrapper[4724]: I0226 13:59:01.141751 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8799371-343f-471f-9e91-12818e2988e9" containerID="0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717" exitCode=0
Feb 26 13:59:01 crc kubenswrapper[4724]: I0226 13:59:01.141825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerDied","Data":"0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717"}
Feb 26 13:59:02 crc kubenswrapper[4724]: I0226 13:59:02.211907 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerStarted","Data":"1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579"}
Feb 26 13:59:02 crc kubenswrapper[4724]: I0226 13:59:02.280835 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2nqn" podStartSLOduration=2.845038109 podStartE2EDuration="6.280681201s" podCreationTimestamp="2026-02-26 13:58:56 +0000 UTC" firstStartedPulling="2026-02-26 13:58:58.108440304 +0000 UTC m=+10404.764179419" lastFinishedPulling="2026-02-26 13:59:01.544083396 +0000 UTC m=+10408.199822511" observedRunningTime="2026-02-26 13:59:02.273027568 +0000 UTC m=+10408.928766693" watchObservedRunningTime="2026-02-26 13:59:02.280681201 +0000 UTC m=+10408.936420316"
Feb 26 13:59:06 crc kubenswrapper[4724]: I0226 13:59:06.996940 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:59:06 crc kubenswrapper[4724]: I0226 13:59:06.997651 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:59:08 crc kubenswrapper[4724]: I0226 13:59:08.194828 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-b2nqn" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="registry-server" probeResult="failure" output=<
Feb 26 13:59:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 13:59:08 crc kubenswrapper[4724]: >
Feb 26 13:59:16 crc kubenswrapper[4724]: I0226 13:59:16.906380 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 13:59:16 crc kubenswrapper[4724]: I0226 13:59:16.906929 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 13:59:16 crc kubenswrapper[4724]: I0226 13:59:16.906989 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 13:59:16 crc kubenswrapper[4724]: I0226 13:59:16.908605 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 13:59:16 crc kubenswrapper[4724]: I0226 13:59:16.908681 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" gracePeriod=600
Feb 26 13:59:17 crc kubenswrapper[4724]: E0226 13:59:17.041277 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
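The startup probe output above, "timeout: failed to connect service \":50051\" within 1s", is the signature of a health check against the registry-server's gRPC port before the catalog is ready; the same pod passes the probe at 13:59:17 once the registry is serving. A rough stand-in for the reachability part of that check (a bare TCP connect with the same 1s budget; the real probe additionally speaks the gRPC health protocol):

import socket

def can_connect(host: str, port: int, timeout_s: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout_s."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# While the registry-server is still loading its catalog this prints False,
# the condition the probe reports as 'failed to connect service ":50051" within 1s'.
print(can_connect("127.0.0.1", 50051))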
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.046776 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.103268 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.289299 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2nqn"]
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.367599 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" exitCode=0
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.367697 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"}
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.367793 4724 scope.go:117] "RemoveContainer" containerID="2b17d85d6c6df34095807be107f1568f69425ad599d566862bbb00bf5e94b604"
Feb 26 13:59:17 crc kubenswrapper[4724]: I0226 13:59:17.371156 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 13:59:17 crc kubenswrapper[4724]: E0226 13:59:17.371588 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.378794 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2nqn" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="registry-server" containerID="cri-o://1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579" gracePeriod=2
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.832892 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2nqn"
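machine-config-daemon has been failing long enough that its restart delay has hit the ceiling, so every retry in this window is refused with "back-off 5m0s". The kubelet's documented crash-loop behaviour is an exponential delay starting at 10s and doubling per consecutive crash up to a 5m cap (reset after the container runs cleanly for long enough). A toy model of that schedule; the constants mirror the documented defaults, not code read out of this cluster:

# Toy model of kubelet crash-loop back-off: 10s initial delay, doubling per
# consecutive crash, capped at 5m (assumed defaults, stated above).
INITIAL_S = 10
CAP_S = 300

def backoff_schedule(restarts: int):
    delay = INITIAL_S
    for _ in range(restarts):
        yield min(delay, CAP_S)
        delay *= 2

print(list(backoff_schedule(7)))  # [10, 20, 40, 80, 160, 300, 300] -> "back-off 5m0s"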
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.886232 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmwlg\" (UniqueName: \"kubernetes.io/projected/a8799371-343f-471f-9e91-12818e2988e9-kube-api-access-rmwlg\") pod \"a8799371-343f-471f-9e91-12818e2988e9\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") "
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.886571 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-catalog-content\") pod \"a8799371-343f-471f-9e91-12818e2988e9\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") "
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.886629 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-utilities\") pod \"a8799371-343f-471f-9e91-12818e2988e9\" (UID: \"a8799371-343f-471f-9e91-12818e2988e9\") "
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.887564 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-utilities" (OuterVolumeSpecName: "utilities") pod "a8799371-343f-471f-9e91-12818e2988e9" (UID: "a8799371-343f-471f-9e91-12818e2988e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.893150 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8799371-343f-471f-9e91-12818e2988e9-kube-api-access-rmwlg" (OuterVolumeSpecName: "kube-api-access-rmwlg") pod "a8799371-343f-471f-9e91-12818e2988e9" (UID: "a8799371-343f-471f-9e91-12818e2988e9"). InnerVolumeSpecName "kube-api-access-rmwlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.914615 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8799371-343f-471f-9e91-12818e2988e9" (UID: "a8799371-343f-471f-9e91-12818e2988e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
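The volume lines through this section all come from one reconcile loop: the kubelet diffs the volumes that should be mounted for the pods it has been told about against what is actually mounted, issues MountVolume or UnmountVolume operations until the two agree, and reports "Volume detached" once a volume drops out of the actual state. A minimal sketch of that desired-state/actual-state pattern (illustrative names, nothing kubelet-specific):

# Desired-state vs. actual-state reconciliation, the pattern behind the
# reconciler_common.go lines in this log. Names here are illustrative.
def reconcile(desired: set, actual: set):
    ops = []
    for vol in sorted(desired - actual):
        ops.append(("MountVolume", vol))    # cf. "operationExecutor.MountVolume started"
    for vol in sorted(actual - desired):
        ops.append(("UnmountVolume", vol))  # cf. "operationExecutor.UnmountVolume started"
    return ops

# Pod deleted: the desired set becomes empty, so every mounted volume is unmounted.
actual = {"utilities", "catalog-content", "kube-api-access-rmwlg"}
print(reconcile(set(), actual))
# [('UnmountVolume', 'catalog-content'), ('UnmountVolume', 'kube-api-access-rmwlg'), ('UnmountVolume', 'utilities')]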
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.989369 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.989401 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8799371-343f-471f-9e91-12818e2988e9-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 13:59:18 crc kubenswrapper[4724]: I0226 13:59:18.989414 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmwlg\" (UniqueName: \"kubernetes.io/projected/a8799371-343f-471f-9e91-12818e2988e9-kube-api-access-rmwlg\") on node \"crc\" DevicePath \"\""
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.392226 4724 generic.go:334] "Generic (PLEG): container finished" podID="a8799371-343f-471f-9e91-12818e2988e9" containerID="1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579" exitCode=0
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.392280 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerDied","Data":"1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579"}
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.392309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2nqn" event={"ID":"a8799371-343f-471f-9e91-12818e2988e9","Type":"ContainerDied","Data":"f92c2dba1b2e67d4170702e5d98de5d3d7ef1b1ac99e9bfa0d680b01e4dfd1cc"}
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.392327 4724 scope.go:117] "RemoveContainer" containerID="1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.392482 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2nqn"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.418632 4724 scope.go:117] "RemoveContainer" containerID="0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.452091 4724 scope.go:117] "RemoveContainer" containerID="3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.465172 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2nqn"]
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.473504 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2nqn"]
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.500604 4724 scope.go:117] "RemoveContainer" containerID="1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579"
Feb 26 13:59:19 crc kubenswrapper[4724]: E0226 13:59:19.505641 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579\": container with ID starting with 1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579 not found: ID does not exist" containerID="1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.505699 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579"} err="failed to get container status \"1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579\": rpc error: code = NotFound desc = could not find container \"1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579\": container with ID starting with 1d7a2e7e40ca2487646674b8dbe8279dba45f31cf2fc43021c0c45266c39d579 not found: ID does not exist"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.505730 4724 scope.go:117] "RemoveContainer" containerID="0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717"
Feb 26 13:59:19 crc kubenswrapper[4724]: E0226 13:59:19.506067 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717\": container with ID starting with 0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717 not found: ID does not exist" containerID="0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.506111 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717"} err="failed to get container status \"0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717\": rpc error: code = NotFound desc = could not find container \"0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717\": container with ID starting with 0b57bccb076665cb4744f7369b28ef695f6e7bfa77993691f131077780a73717 not found: ID does not exist"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.506133 4724 scope.go:117] "RemoveContainer" containerID="3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153"
Feb 26 13:59:19 crc kubenswrapper[4724]: E0226 13:59:19.506981 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153\": container with ID starting with 3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153 not found: ID does not exist" containerID="3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153"
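The NotFound errors above are benign: the containers were already gone from CRI-O by the time the deletor asked for their status, so the second delete attempt had nothing left to do. Although logged at error level, they describe an idempotent cleanup converging on the desired end state. The usual way to code that tolerance, sketched generically rather than as the kubelet's code:

class NotFoundError(Exception):
    """Stand-in for a runtime's NotFound status (e.g. gRPC code NotFound)."""

def remove_container(runtime, container_id: str) -> None:
    # Idempotent delete: "already gone" counts as success, anything else propagates.
    try:
        runtime.delete(container_id)
    except NotFoundError:
        # Matches the log's "could not find container ...: ID does not exist" case;
        # the desired end state (container absent) already holds.
        pass

class FakeRuntime:
    def delete(self, container_id: str) -> None:
        raise NotFoundError(container_id)

remove_container(FakeRuntime(), "1d7a2e7e40ca24")  # completes without raising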
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.507031 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153"} err="failed to get container status \"3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153\": rpc error: code = NotFound desc = could not find container \"3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153\": container with ID starting with 3f5c670baf00f969723b9e9d31a868f91aa18bdd66066819dd671e174957b153 not found: ID does not exist"
Feb 26 13:59:19 crc kubenswrapper[4724]: I0226 13:59:19.988713 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8799371-343f-471f-9e91-12818e2988e9" path="/var/lib/kubelet/pods/a8799371-343f-471f-9e91-12818e2988e9/volumes"
Feb 26 13:59:28 crc kubenswrapper[4724]: I0226 13:59:28.976335 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 13:59:28 crc kubenswrapper[4724]: E0226 13:59:28.977343 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.944739 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wsrc7"]
Feb 26 13:59:39 crc kubenswrapper[4724]: E0226 13:59:39.946270 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="extract-utilities"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.946294 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="extract-utilities"
Feb 26 13:59:39 crc kubenswrapper[4724]: E0226 13:59:39.946337 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="registry-server"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.946343 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="registry-server"
Feb 26 13:59:39 crc kubenswrapper[4724]: E0226 13:59:39.946374 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="extract-content"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.946381 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="extract-content"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.946594 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8799371-343f-471f-9e91-12818e2988e9" containerName="registry-server"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.951955 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:39 crc kubenswrapper[4724]: I0226 13:59:39.962322 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsrc7"]
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.102254 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-catalog-content\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.102376 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p6bf\" (UniqueName: \"kubernetes.io/projected/76028d58-1109-4e9d-a4a7-24222feb4b7f-kube-api-access-5p6bf\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.102474 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-utilities\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.205459 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p6bf\" (UniqueName: \"kubernetes.io/projected/76028d58-1109-4e9d-a4a7-24222feb4b7f-kube-api-access-5p6bf\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.205571 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-utilities\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.205742 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-catalog-content\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.206450 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-utilities\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.206781 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-catalog-content\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.232220 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p6bf\" (UniqueName: \"kubernetes.io/projected/76028d58-1109-4e9d-a4a7-24222feb4b7f-kube-api-access-5p6bf\") pod \"certified-operators-wsrc7\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.286546 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:40 crc kubenswrapper[4724]: I0226 13:59:40.819565 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsrc7"]
Feb 26 13:59:41 crc kubenswrapper[4724]: I0226 13:59:41.590279 4724 generic.go:334] "Generic (PLEG): container finished" podID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerID="89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9" exitCode=0
Feb 26 13:59:41 crc kubenswrapper[4724]: I0226 13:59:41.591368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerDied","Data":"89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9"}
Feb 26 13:59:41 crc kubenswrapper[4724]: I0226 13:59:41.591451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerStarted","Data":"8a5abbc5fd026cbadb7434f8df61914e17de34d138013fda0485ad73160b193c"}
Feb 26 13:59:43 crc kubenswrapper[4724]: I0226 13:59:43.982804 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 13:59:43 crc kubenswrapper[4724]: E0226 13:59:43.983728 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 13:59:44 crc kubenswrapper[4724]: I0226 13:59:44.621512 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerStarted","Data":"82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3"}
Feb 26 13:59:48 crc kubenswrapper[4724]: I0226 13:59:48.664992 4724 generic.go:334] "Generic (PLEG): container finished" podID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerID="82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3" exitCode=0
Feb 26 13:59:48 crc kubenswrapper[4724]: I0226 13:59:48.665120 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerDied","Data":"82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3"}
Feb 26 13:59:48 crc kubenswrapper[4724]: I0226 13:59:48.668089 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 13:59:49 crc kubenswrapper[4724]: I0226 13:59:49.677986 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerStarted","Data":"60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b"}
Feb 26 13:59:49 crc kubenswrapper[4724]: I0226 13:59:49.719915 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wsrc7" podStartSLOduration=3.261110883 podStartE2EDuration="10.719891275s" podCreationTimestamp="2026-02-26 13:59:39 +0000 UTC" firstStartedPulling="2026-02-26 13:59:41.591913771 +0000 UTC m=+10448.247652886" lastFinishedPulling="2026-02-26 13:59:49.050694163 +0000 UTC m=+10455.706433278" observedRunningTime="2026-02-26 13:59:49.701580322 +0000 UTC m=+10456.357319527" watchObservedRunningTime="2026-02-26 13:59:49.719891275 +0000 UTC m=+10456.375630400"
Feb 26 13:59:50 crc kubenswrapper[4724]: I0226 13:59:50.287298 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:50 crc kubenswrapper[4724]: I0226 13:59:50.287668 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wsrc7"
Feb 26 13:59:51 crc kubenswrapper[4724]: I0226 13:59:51.546374 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wsrc7" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" probeResult="failure" output=<
Feb 26 13:59:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 13:59:51 crc kubenswrapper[4724]: >
Feb 26 13:59:54 crc kubenswrapper[4724]: I0226 13:59:54.975435 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 13:59:54 crc kubenswrapper[4724]: E0226 13:59:54.976269 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.214384 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535240-5rkzk"]
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.218637 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535240-5rkzk"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.226952 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.227144 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.228082 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.242394 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"]
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.243932 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
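certified-operators-wsrc7 follows the same catalog-pod shape seen earlier with redhat-marketplace-b2nqn: the extract containers each run to completion (the exitCode=0 "container finished" lines at 13:59:41 and 13:59:48), then the long-lived registry-server starts at 13:59:49 and is probed on :50051 until its catalog is loaded. Folding parsed PLEG tuples into a per-pod history makes that ordering explicit; this reuses the pleg_events sketch from earlier in this section, with container IDs abbreviated:

from collections import defaultdict

# (pod, event type, container ID) tuples as the earlier parser sketch would
# yield them for the certified-operators-wsrc7 lines above.
events = [
    ("openshift-marketplace/certified-operators-wsrc7", "ContainerDied",    "89e98c57645a"),
    ("openshift-marketplace/certified-operators-wsrc7", "ContainerStarted", "82f35bf1dc06"),
    ("openshift-marketplace/certified-operators-wsrc7", "ContainerDied",    "82f35bf1dc06"),
    ("openshift-marketplace/certified-operators-wsrc7", "ContainerStarted", "60b9291bc17f"),
]

timeline = defaultdict(list)
for pod, etype, cid in events:
    timeline[pod].append((etype, cid))

# The extract containers start and die in sequence; the final Started entry
# with no matching Died is the long-running registry-server.
print(timeline["openshift-marketplace/certified-operators-wsrc7"])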
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.247642 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.251758 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.276435 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9vw\" (UniqueName: \"kubernetes.io/projected/1011342d-98f8-4495-9997-b52e55037233-kube-api-access-td9vw\") pod \"auto-csr-approver-29535240-5rkzk\" (UID: \"1011342d-98f8-4495-9997-b52e55037233\") " pod="openshift-infra/auto-csr-approver-29535240-5rkzk"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.276538 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.276607 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mww9\" (UniqueName: \"kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.276641 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.278058 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"]
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.296600 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535240-5rkzk"]
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.379709 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9vw\" (UniqueName: \"kubernetes.io/projected/1011342d-98f8-4495-9997-b52e55037233-kube-api-access-td9vw\") pod \"auto-csr-approver-29535240-5rkzk\" (UID: \"1011342d-98f8-4495-9997-b52e55037233\") " pod="openshift-infra/auto-csr-approver-29535240-5rkzk"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.379937 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.380019 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mww9\" (UniqueName: \"kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.380066 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.381697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.408205 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9vw\" (UniqueName: \"kubernetes.io/projected/1011342d-98f8-4495-9997-b52e55037233-kube-api-access-td9vw\") pod \"auto-csr-approver-29535240-5rkzk\" (UID: \"1011342d-98f8-4495-9997-b52e55037233\") " pod="openshift-infra/auto-csr-approver-29535240-5rkzk"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.408283 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mww9\" (UniqueName: \"kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.408361 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume\") pod \"collect-profiles-29535240-rcgj5\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.577202 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535240-5rkzk"
Feb 26 14:00:00 crc kubenswrapper[4724]: I0226 14:00:00.610074 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
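Every volume in these mount lines carries a UniqueName of the form kubernetes.io/<plugin>/<podUID>-<volumeName>, which is how the same reconciler handles the collect-profiles pod's ConfigMap (config-volume), Secret (secret-volume), and projected service-account token (kube-api-access-9mww9) side by side. Splitting the UniqueName back into its parts is a one-liner; the parsing here is inferred from the strings above, not an official format guarantee:

def split_unique_name(unique: str):
    """Split 'kubernetes.io/<plugin>/<suffix>' into (plugin, suffix)."""
    prefix, plugin, suffix = unique.split("/", 2)
    assert prefix == "kubernetes.io"
    return plugin, suffix

for u in (
    "kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume",
    "kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume",
    "kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9",
):
    print(split_unique_name(u))
# ('configmap', ...-config-volume), ('secret', ...-secret-volume), ('projected', ...-kube-api-access-9mww9)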
Feb 26 14:00:01 crc kubenswrapper[4724]: I0226 14:00:01.360767 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wsrc7" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:00:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:00:01 crc kubenswrapper[4724]: >
Feb 26 14:00:01 crc kubenswrapper[4724]: I0226 14:00:01.891780 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"]
Feb 26 14:00:01 crc kubenswrapper[4724]: I0226 14:00:01.903744 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535240-5rkzk"]
Feb 26 14:00:01 crc kubenswrapper[4724]: W0226 14:00:01.909358 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a934683_a48a_4008_b63f_9cdad4022fba.slice/crio-ad7b7b4af13d328ee3eb16d73445d024dd410ab9e5ed41f97274e11b70d1cd3b WatchSource:0}: Error finding container ad7b7b4af13d328ee3eb16d73445d024dd410ab9e5ed41f97274e11b70d1cd3b: Status 404 returned error can't find the container with id ad7b7b4af13d328ee3eb16d73445d024dd410ab9e5ed41f97274e11b70d1cd3b
Feb 26 14:00:02 crc kubenswrapper[4724]: I0226 14:00:02.823205 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" event={"ID":"1011342d-98f8-4495-9997-b52e55037233","Type":"ContainerStarted","Data":"d7a7d358c3a08239d45d31731cbbcac62a6c9935b789430b73d2d10fc004d65a"}
Feb 26 14:00:02 crc kubenswrapper[4724]: I0226 14:00:02.825761 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5" event={"ID":"2a934683-a48a-4008-b63f-9cdad4022fba","Type":"ContainerStarted","Data":"10f7855e4ba4aa5be5dc943ba8d259a7f08e27aa621febc0ba28d6beca456d9c"}
Feb 26 14:00:02 crc kubenswrapper[4724]: I0226 14:00:02.825855 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5" event={"ID":"2a934683-a48a-4008-b63f-9cdad4022fba","Type":"ContainerStarted","Data":"ad7b7b4af13d328ee3eb16d73445d024dd410ab9e5ed41f97274e11b70d1cd3b"}
Feb 26 14:00:02 crc kubenswrapper[4724]: I0226 14:00:02.858142 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5" podStartSLOduration=2.858113402 podStartE2EDuration="2.858113402s" podCreationTimestamp="2026-02-26 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:00:02.851668989 +0000 UTC m=+10469.507408104" watchObservedRunningTime="2026-02-26 14:00:02.858113402 +0000 UTC m=+10469.513852527"
Feb 26 14:00:03 crc kubenswrapper[4724]: I0226 14:00:03.836921 4724 generic.go:334] "Generic (PLEG): container finished" podID="2a934683-a48a-4008-b63f-9cdad4022fba" containerID="10f7855e4ba4aa5be5dc943ba8d259a7f08e27aa621febc0ba28d6beca456d9c" exitCode=0
Feb 26 14:00:03 crc kubenswrapper[4724]: I0226 14:00:03.837262 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5" event={"ID":"2a934683-a48a-4008-b63f-9cdad4022fba","Type":"ContainerDied","Data":"10f7855e4ba4aa5be5dc943ba8d259a7f08e27aa621febc0ba28d6beca456d9c"}
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.315234 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.423333 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume\") pod \"2a934683-a48a-4008-b63f-9cdad4022fba\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") "
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.423604 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume\") pod \"2a934683-a48a-4008-b63f-9cdad4022fba\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") "
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.423819 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mww9\" (UniqueName: \"kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9\") pod \"2a934683-a48a-4008-b63f-9cdad4022fba\" (UID: \"2a934683-a48a-4008-b63f-9cdad4022fba\") "
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.424754 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a934683-a48a-4008-b63f-9cdad4022fba" (UID: "2a934683-a48a-4008-b63f-9cdad4022fba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.425083 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a934683-a48a-4008-b63f-9cdad4022fba-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.432761 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a934683-a48a-4008-b63f-9cdad4022fba" (UID: "2a934683-a48a-4008-b63f-9cdad4022fba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.434514 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9" (OuterVolumeSpecName: "kube-api-access-9mww9") pod "2a934683-a48a-4008-b63f-9cdad4022fba" (UID: "2a934683-a48a-4008-b63f-9cdad4022fba"). InnerVolumeSpecName "kube-api-access-9mww9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.528018 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a934683-a48a-4008-b63f-9cdad4022fba-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.528077 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mww9\" (UniqueName: \"kubernetes.io/projected/2a934683-a48a-4008-b63f-9cdad4022fba-kube-api-access-9mww9\") on node \"crc\" DevicePath \"\""
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.866044 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" event={"ID":"1011342d-98f8-4495-9997-b52e55037233","Type":"ContainerStarted","Data":"4879b777422fa8bcf74b6b70b95d558c49b52a868d7a143acbc412b0499b61eb"}
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.868097 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5" event={"ID":"2a934683-a48a-4008-b63f-9cdad4022fba","Type":"ContainerDied","Data":"ad7b7b4af13d328ee3eb16d73445d024dd410ab9e5ed41f97274e11b70d1cd3b"}
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.868141 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7b7b4af13d328ee3eb16d73445d024dd410ab9e5ed41f97274e11b70d1cd3b"
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.868201 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"
Feb 26 14:00:05 crc kubenswrapper[4724]: I0226 14:00:05.894344 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" podStartSLOduration=3.087515919 podStartE2EDuration="5.89431841s" podCreationTimestamp="2026-02-26 14:00:00 +0000 UTC" firstStartedPulling="2026-02-26 14:00:01.918788794 +0000 UTC m=+10468.574527919" lastFinishedPulling="2026-02-26 14:00:04.725591295 +0000 UTC m=+10471.381330410" observedRunningTime="2026-02-26 14:00:05.884512853 +0000 UTC m=+10472.540251988" watchObservedRunningTime="2026-02-26 14:00:05.89431841 +0000 UTC m=+10472.550057526"
Feb 26 14:00:06 crc kubenswrapper[4724]: I0226 14:00:06.432206 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml"]
Feb 26 14:00:06 crc kubenswrapper[4724]: I0226 14:00:06.442258 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535195-gx2ml"]
Feb 26 14:00:06 crc kubenswrapper[4724]: I0226 14:00:06.880228 4724 generic.go:334] "Generic (PLEG): container finished" podID="1011342d-98f8-4495-9997-b52e55037233" containerID="4879b777422fa8bcf74b6b70b95d558c49b52a868d7a143acbc412b0499b61eb" exitCode=0
Feb 26 14:00:06 crc kubenswrapper[4724]: I0226 14:00:06.880323 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" event={"ID":"1011342d-98f8-4495-9997-b52e55037233","Type":"ContainerDied","Data":"4879b777422fa8bcf74b6b70b95d558c49b52a868d7a143acbc412b0499b61eb"}
Feb 26 14:00:07 crc kubenswrapper[4724]: I0226 14:00:07.988512 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4bb34d-f6bb-438a-9fd9-6a7d1155d311" path="/var/lib/kubelet/pods/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311/volumes"
path="/var/lib/kubelet/pods/9e4bb34d-f6bb-438a-9fd9-6a7d1155d311/volumes" Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.718679 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.807575 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9vw\" (UniqueName: \"kubernetes.io/projected/1011342d-98f8-4495-9997-b52e55037233-kube-api-access-td9vw\") pod \"1011342d-98f8-4495-9997-b52e55037233\" (UID: \"1011342d-98f8-4495-9997-b52e55037233\") " Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.818532 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1011342d-98f8-4495-9997-b52e55037233-kube-api-access-td9vw" (OuterVolumeSpecName: "kube-api-access-td9vw") pod "1011342d-98f8-4495-9997-b52e55037233" (UID: "1011342d-98f8-4495-9997-b52e55037233"). InnerVolumeSpecName "kube-api-access-td9vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.906170 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" event={"ID":"1011342d-98f8-4495-9997-b52e55037233","Type":"ContainerDied","Data":"d7a7d358c3a08239d45d31731cbbcac62a6c9935b789430b73d2d10fc004d65a"} Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.906223 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7a7d358c3a08239d45d31731cbbcac62a6c9935b789430b73d2d10fc004d65a" Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.906364 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535240-5rkzk" Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.910986 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9vw\" (UniqueName: \"kubernetes.io/projected/1011342d-98f8-4495-9997-b52e55037233-kube-api-access-td9vw\") on node \"crc\" DevicePath \"\"" Feb 26 14:00:08 crc kubenswrapper[4724]: I0226 14:00:08.981105 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:00:08 crc kubenswrapper[4724]: E0226 14:00:08.981619 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:00:09 crc kubenswrapper[4724]: I0226 14:00:09.002319 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535234-5qtgg"] Feb 26 14:00:09 crc kubenswrapper[4724]: I0226 14:00:09.014853 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535234-5qtgg"] Feb 26 14:00:09 crc kubenswrapper[4724]: I0226 14:00:09.987919 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621a966f-e03c-41bc-82dc-121342e2ea65" path="/var/lib/kubelet/pods/621a966f-e03c-41bc-82dc-121342e2ea65/volumes" Feb 26 14:00:12 crc kubenswrapper[4724]: I0226 14:00:12.131625 4724 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-wsrc7" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:00:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:00:12 crc kubenswrapper[4724]: > Feb 26 14:00:19 crc kubenswrapper[4724]: I0226 14:00:19.976669 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:00:19 crc kubenswrapper[4724]: E0226 14:00:19.977882 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:00:20 crc kubenswrapper[4724]: I0226 14:00:20.387291 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wsrc7" Feb 26 14:00:20 crc kubenswrapper[4724]: I0226 14:00:20.453629 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wsrc7" Feb 26 14:00:20 crc kubenswrapper[4724]: I0226 14:00:20.520280 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsrc7"] Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.056469 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wsrc7" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" containerID="cri-o://60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b" gracePeriod=2 Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.584406 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsrc7" Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.730944 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p6bf\" (UniqueName: \"kubernetes.io/projected/76028d58-1109-4e9d-a4a7-24222feb4b7f-kube-api-access-5p6bf\") pod \"76028d58-1109-4e9d-a4a7-24222feb4b7f\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.731417 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-catalog-content\") pod \"76028d58-1109-4e9d-a4a7-24222feb4b7f\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.731462 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-utilities\") pod \"76028d58-1109-4e9d-a4a7-24222feb4b7f\" (UID: \"76028d58-1109-4e9d-a4a7-24222feb4b7f\") " Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.732615 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-utilities" (OuterVolumeSpecName: "utilities") pod "76028d58-1109-4e9d-a4a7-24222feb4b7f" (UID: "76028d58-1109-4e9d-a4a7-24222feb4b7f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.743001 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76028d58-1109-4e9d-a4a7-24222feb4b7f-kube-api-access-5p6bf" (OuterVolumeSpecName: "kube-api-access-5p6bf") pod "76028d58-1109-4e9d-a4a7-24222feb4b7f" (UID: "76028d58-1109-4e9d-a4a7-24222feb4b7f"). InnerVolumeSpecName "kube-api-access-5p6bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.817805 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76028d58-1109-4e9d-a4a7-24222feb4b7f" (UID: "76028d58-1109-4e9d-a4a7-24222feb4b7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.834820 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p6bf\" (UniqueName: \"kubernetes.io/projected/76028d58-1109-4e9d-a4a7-24222feb4b7f-kube-api-access-5p6bf\") on node \"crc\" DevicePath \"\"" Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.835280 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:00:22 crc kubenswrapper[4724]: I0226 14:00:22.835367 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76028d58-1109-4e9d-a4a7-24222feb4b7f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.071938 4724 generic.go:334] "Generic (PLEG): container finished" podID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerID="60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b" exitCode=0 Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.072026 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerDied","Data":"60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b"} Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.072051 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsrc7" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.072094 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsrc7" event={"ID":"76028d58-1109-4e9d-a4a7-24222feb4b7f","Type":"ContainerDied","Data":"8a5abbc5fd026cbadb7434f8df61914e17de34d138013fda0485ad73160b193c"} Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.072123 4724 scope.go:117] "RemoveContainer" containerID="60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.111303 4724 scope.go:117] "RemoveContainer" containerID="82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.137528 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsrc7"] Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.152794 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wsrc7"] Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.167601 4724 scope.go:117] "RemoveContainer" containerID="89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.207353 4724 scope.go:117] "RemoveContainer" containerID="60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b" Feb 26 14:00:23 crc kubenswrapper[4724]: E0226 14:00:23.208371 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b\": container with ID starting with 60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b not found: ID does not exist" containerID="60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.208456 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b"} err="failed to get container status \"60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b\": rpc error: code = NotFound desc = could not find container \"60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b\": container with ID starting with 60b9291bc17ff199cb5c05d5fb3d6634899775838180958626457515f58ff82b not found: ID does not exist" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.208494 4724 scope.go:117] "RemoveContainer" containerID="82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3" Feb 26 14:00:23 crc kubenswrapper[4724]: E0226 14:00:23.209855 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3\": container with ID starting with 82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3 not found: ID does not exist" containerID="82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.209917 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3"} err="failed to get container status \"82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3\": rpc error: code = NotFound desc = could not find 
container \"82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3\": container with ID starting with 82f35bf1dc0687eaf50444b2078e141a764ce0ec1c69d17d0cf7f68e4873a6c3 not found: ID does not exist" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.209952 4724 scope.go:117] "RemoveContainer" containerID="89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9" Feb 26 14:00:23 crc kubenswrapper[4724]: E0226 14:00:23.210894 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9\": container with ID starting with 89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9 not found: ID does not exist" containerID="89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.210981 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9"} err="failed to get container status \"89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9\": rpc error: code = NotFound desc = could not find container \"89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9\": container with ID starting with 89e98c57645a8fdd1a628b8a89bead000d54e9d8e227a52b8ad27b74332ce5f9 not found: ID does not exist" Feb 26 14:00:23 crc kubenswrapper[4724]: I0226 14:00:23.986303 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" path="/var/lib/kubelet/pods/76028d58-1109-4e9d-a4a7-24222feb4b7f/volumes" Feb 26 14:00:30 crc kubenswrapper[4724]: I0226 14:00:30.436733 4724 scope.go:117] "RemoveContainer" containerID="7c0cf80f1f46bc30166516b17a3dc65642704e8f72f8825d19c81fb37ca08901" Feb 26 14:00:30 crc kubenswrapper[4724]: I0226 14:00:30.488240 4724 scope.go:117] "RemoveContainer" containerID="fbae59b9af9f2e90e9db4908ecb38e3327f336b7957d0f8c51bb458de3f2b805" Feb 26 14:00:34 crc kubenswrapper[4724]: I0226 14:00:34.976636 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:00:34 crc kubenswrapper[4724]: E0226 14:00:34.977950 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:00:49 crc kubenswrapper[4724]: I0226 14:00:49.975531 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:00:49 crc kubenswrapper[4724]: E0226 14:00:49.976529 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.180053 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-cron-29535241-52mbf"] Feb 26 14:01:00 crc kubenswrapper[4724]: E0226 14:01:00.182131 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1011342d-98f8-4495-9997-b52e55037233" containerName="oc" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182255 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1011342d-98f8-4495-9997-b52e55037233" containerName="oc" Feb 26 14:01:00 crc kubenswrapper[4724]: E0226 14:01:00.182352 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="extract-content" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182415 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="extract-content" Feb 26 14:01:00 crc kubenswrapper[4724]: E0226 14:01:00.182496 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="extract-utilities" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182552 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="extract-utilities" Feb 26 14:01:00 crc kubenswrapper[4724]: E0226 14:01:00.182616 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182674 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" Feb 26 14:01:00 crc kubenswrapper[4724]: E0226 14:01:00.182741 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a934683-a48a-4008-b63f-9cdad4022fba" containerName="collect-profiles" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182749 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a934683-a48a-4008-b63f-9cdad4022fba" containerName="collect-profiles" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182929 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="76028d58-1109-4e9d-a4a7-24222feb4b7f" containerName="registry-server" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182941 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a934683-a48a-4008-b63f-9cdad4022fba" containerName="collect-profiles" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.182966 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1011342d-98f8-4495-9997-b52e55037233" containerName="oc" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.183917 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.203426 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535241-52mbf"] Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.329830 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxf89\" (UniqueName: \"kubernetes.io/projected/5276bce5-b50f-415f-a487-2bcf33a42e0d-kube-api-access-zxf89\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.330310 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-config-data\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.330510 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-combined-ca-bundle\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.330689 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-fernet-keys\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.432970 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-fernet-keys\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.433308 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxf89\" (UniqueName: \"kubernetes.io/projected/5276bce5-b50f-415f-a487-2bcf33a42e0d-kube-api-access-zxf89\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.433478 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-config-data\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.433604 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-combined-ca-bundle\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.497345 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-config-data\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.497540 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-combined-ca-bundle\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.497616 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-fernet-keys\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.498408 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxf89\" (UniqueName: \"kubernetes.io/projected/5276bce5-b50f-415f-a487-2bcf33a42e0d-kube-api-access-zxf89\") pod \"keystone-cron-29535241-52mbf\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:00 crc kubenswrapper[4724]: I0226 14:01:00.525954 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:01 crc kubenswrapper[4724]: I0226 14:01:01.023148 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535241-52mbf"] Feb 26 14:01:01 crc kubenswrapper[4724]: I0226 14:01:01.474732 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535241-52mbf" event={"ID":"5276bce5-b50f-415f-a487-2bcf33a42e0d","Type":"ContainerStarted","Data":"f180abeff9ebb94f40c9188241904eb11549a43a12d9b90d4554c5fc18585d6a"} Feb 26 14:01:01 crc kubenswrapper[4724]: I0226 14:01:01.474798 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535241-52mbf" event={"ID":"5276bce5-b50f-415f-a487-2bcf33a42e0d","Type":"ContainerStarted","Data":"200225f49a4f3371d9908b3839b52e18f8b79eb8094428ab81776bc7f987f909"} Feb 26 14:01:01 crc kubenswrapper[4724]: I0226 14:01:01.506445 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535241-52mbf" podStartSLOduration=1.506419763 podStartE2EDuration="1.506419763s" podCreationTimestamp="2026-02-26 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:01:01.49642578 +0000 UTC m=+10528.152164905" watchObservedRunningTime="2026-02-26 14:01:01.506419763 +0000 UTC m=+10528.162158878" Feb 26 14:01:01 crc kubenswrapper[4724]: I0226 14:01:01.975964 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:01:01 crc kubenswrapper[4724]: E0226 14:01:01.976487 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:01:05 crc kubenswrapper[4724]: I0226 14:01:05.514153 4724 generic.go:334] "Generic (PLEG): container finished" podID="5276bce5-b50f-415f-a487-2bcf33a42e0d" containerID="f180abeff9ebb94f40c9188241904eb11549a43a12d9b90d4554c5fc18585d6a" exitCode=0 Feb 26 14:01:05 crc kubenswrapper[4724]: I0226 14:01:05.514285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535241-52mbf" event={"ID":"5276bce5-b50f-415f-a487-2bcf33a42e0d","Type":"ContainerDied","Data":"f180abeff9ebb94f40c9188241904eb11549a43a12d9b90d4554c5fc18585d6a"} Feb 26 14:01:06 crc kubenswrapper[4724]: I0226 14:01:06.964072 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.072119 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-config-data\") pod \"5276bce5-b50f-415f-a487-2bcf33a42e0d\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.072707 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-fernet-keys\") pod \"5276bce5-b50f-415f-a487-2bcf33a42e0d\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.072791 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxf89\" (UniqueName: \"kubernetes.io/projected/5276bce5-b50f-415f-a487-2bcf33a42e0d-kube-api-access-zxf89\") pod \"5276bce5-b50f-415f-a487-2bcf33a42e0d\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.072883 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-combined-ca-bundle\") pod \"5276bce5-b50f-415f-a487-2bcf33a42e0d\" (UID: \"5276bce5-b50f-415f-a487-2bcf33a42e0d\") " Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.092350 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5276bce5-b50f-415f-a487-2bcf33a42e0d" (UID: "5276bce5-b50f-415f-a487-2bcf33a42e0d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.093150 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5276bce5-b50f-415f-a487-2bcf33a42e0d-kube-api-access-zxf89" (OuterVolumeSpecName: "kube-api-access-zxf89") pod "5276bce5-b50f-415f-a487-2bcf33a42e0d" (UID: "5276bce5-b50f-415f-a487-2bcf33a42e0d"). InnerVolumeSpecName "kube-api-access-zxf89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.115644 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5276bce5-b50f-415f-a487-2bcf33a42e0d" (UID: "5276bce5-b50f-415f-a487-2bcf33a42e0d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.168324 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-config-data" (OuterVolumeSpecName: "config-data") pod "5276bce5-b50f-415f-a487-2bcf33a42e0d" (UID: "5276bce5-b50f-415f-a487-2bcf33a42e0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.177566 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.177626 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.177648 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxf89\" (UniqueName: \"kubernetes.io/projected/5276bce5-b50f-415f-a487-2bcf33a42e0d-kube-api-access-zxf89\") on node \"crc\" DevicePath \"\"" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.177667 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5276bce5-b50f-415f-a487-2bcf33a42e0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.534966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535241-52mbf" event={"ID":"5276bce5-b50f-415f-a487-2bcf33a42e0d","Type":"ContainerDied","Data":"200225f49a4f3371d9908b3839b52e18f8b79eb8094428ab81776bc7f987f909"} Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.535010 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200225f49a4f3371d9908b3839b52e18f8b79eb8094428ab81776bc7f987f909" Feb 26 14:01:07 crc kubenswrapper[4724]: I0226 14:01:07.535062 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535241-52mbf" Feb 26 14:01:13 crc kubenswrapper[4724]: I0226 14:01:13.976216 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:01:13 crc kubenswrapper[4724]: E0226 14:01:13.977043 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:01:28 crc kubenswrapper[4724]: I0226 14:01:28.975888 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:01:28 crc kubenswrapper[4724]: E0226 14:01:28.976727 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:01:40 crc kubenswrapper[4724]: I0226 14:01:40.976130 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:01:40 crc kubenswrapper[4724]: E0226 14:01:40.976856 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:01:54 crc kubenswrapper[4724]: I0226 14:01:54.976529 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:01:54 crc kubenswrapper[4724]: E0226 14:01:54.977421 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.191231 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535242-9p4mf"] Feb 26 14:02:00 crc kubenswrapper[4724]: E0226 14:02:00.195492 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5276bce5-b50f-415f-a487-2bcf33a42e0d" containerName="keystone-cron" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.195534 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5276bce5-b50f-415f-a487-2bcf33a42e0d" containerName="keystone-cron" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.195917 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5276bce5-b50f-415f-a487-2bcf33a42e0d" containerName="keystone-cron" Feb 26 14:02:00 crc 
kubenswrapper[4724]: I0226 14:02:00.198337 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.203043 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.210913 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.211778 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535242-9p4mf"] Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.229220 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.296402 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2wl\" (UniqueName: \"kubernetes.io/projected/46e5e605-5a6d-4533-aadd-00fe984cd22b-kube-api-access-6l2wl\") pod \"auto-csr-approver-29535242-9p4mf\" (UID: \"46e5e605-5a6d-4533-aadd-00fe984cd22b\") " pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.397995 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l2wl\" (UniqueName: \"kubernetes.io/projected/46e5e605-5a6d-4533-aadd-00fe984cd22b-kube-api-access-6l2wl\") pod \"auto-csr-approver-29535242-9p4mf\" (UID: \"46e5e605-5a6d-4533-aadd-00fe984cd22b\") " pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.421745 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l2wl\" (UniqueName: \"kubernetes.io/projected/46e5e605-5a6d-4533-aadd-00fe984cd22b-kube-api-access-6l2wl\") pod \"auto-csr-approver-29535242-9p4mf\" (UID: \"46e5e605-5a6d-4533-aadd-00fe984cd22b\") " pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:00 crc kubenswrapper[4724]: I0226 14:02:00.520004 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:01 crc kubenswrapper[4724]: I0226 14:02:01.218551 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535242-9p4mf"] Feb 26 14:02:01 crc kubenswrapper[4724]: I0226 14:02:01.382897 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" event={"ID":"46e5e605-5a6d-4533-aadd-00fe984cd22b","Type":"ContainerStarted","Data":"591e38aab0cc7858175274c89edad903894cf36da92e906e1298aa8819afd523"} Feb 26 14:02:03 crc kubenswrapper[4724]: I0226 14:02:03.407428 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" event={"ID":"46e5e605-5a6d-4533-aadd-00fe984cd22b","Type":"ContainerStarted","Data":"71bcd1f4a1a8612b9a2508982ecbe19cf28911a847216dce424162565f131969"} Feb 26 14:02:03 crc kubenswrapper[4724]: I0226 14:02:03.448044 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" podStartSLOduration=2.220958647 podStartE2EDuration="3.447997336s" podCreationTimestamp="2026-02-26 14:02:00 +0000 UTC" firstStartedPulling="2026-02-26 14:02:01.23096094 +0000 UTC m=+10587.886700055" lastFinishedPulling="2026-02-26 14:02:02.457999619 +0000 UTC m=+10589.113738744" observedRunningTime="2026-02-26 14:02:03.440138048 +0000 UTC m=+10590.095877173" watchObservedRunningTime="2026-02-26 14:02:03.447997336 +0000 UTC m=+10590.103736451" Feb 26 14:02:05 crc kubenswrapper[4724]: I0226 14:02:05.435408 4724 generic.go:334] "Generic (PLEG): container finished" podID="46e5e605-5a6d-4533-aadd-00fe984cd22b" containerID="71bcd1f4a1a8612b9a2508982ecbe19cf28911a847216dce424162565f131969" exitCode=0 Feb 26 14:02:05 crc kubenswrapper[4724]: I0226 14:02:05.435485 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" event={"ID":"46e5e605-5a6d-4533-aadd-00fe984cd22b","Type":"ContainerDied","Data":"71bcd1f4a1a8612b9a2508982ecbe19cf28911a847216dce424162565f131969"} Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.088956 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.206945 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l2wl\" (UniqueName: \"kubernetes.io/projected/46e5e605-5a6d-4533-aadd-00fe984cd22b-kube-api-access-6l2wl\") pod \"46e5e605-5a6d-4533-aadd-00fe984cd22b\" (UID: \"46e5e605-5a6d-4533-aadd-00fe984cd22b\") " Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.214726 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e5e605-5a6d-4533-aadd-00fe984cd22b-kube-api-access-6l2wl" (OuterVolumeSpecName: "kube-api-access-6l2wl") pod "46e5e605-5a6d-4533-aadd-00fe984cd22b" (UID: "46e5e605-5a6d-4533-aadd-00fe984cd22b"). InnerVolumeSpecName "kube-api-access-6l2wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.310442 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l2wl\" (UniqueName: \"kubernetes.io/projected/46e5e605-5a6d-4533-aadd-00fe984cd22b-kube-api-access-6l2wl\") on node \"crc\" DevicePath \"\"" Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.469538 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" event={"ID":"46e5e605-5a6d-4533-aadd-00fe984cd22b","Type":"ContainerDied","Data":"591e38aab0cc7858175274c89edad903894cf36da92e906e1298aa8819afd523"} Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.469598 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="591e38aab0cc7858175274c89edad903894cf36da92e906e1298aa8819afd523" Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.469605 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535242-9p4mf" Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.579858 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535236-vwhdd"] Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.590886 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535236-vwhdd"] Feb 26 14:02:07 crc kubenswrapper[4724]: I0226 14:02:07.991246 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f8e897-2aae-472c-9741-67537defe9d1" path="/var/lib/kubelet/pods/f8f8e897-2aae-472c-9741-67537defe9d1/volumes" Feb 26 14:02:08 crc kubenswrapper[4724]: I0226 14:02:08.976329 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:02:08 crc kubenswrapper[4724]: E0226 14:02:08.977125 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:02:20 crc kubenswrapper[4724]: I0226 14:02:20.977052 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:02:20 crc kubenswrapper[4724]: E0226 14:02:20.978694 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:02:30 crc kubenswrapper[4724]: I0226 14:02:30.649903 4724 scope.go:117] "RemoveContainer" containerID="63982e76cb7175666804bfa659266ba78892012c972ed2c701805c614f53d51c" Feb 26 14:02:35 crc kubenswrapper[4724]: I0226 14:02:35.976683 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:02:35 crc kubenswrapper[4724]: E0226 14:02:35.977337 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:02:49 crc kubenswrapper[4724]: I0226 14:02:49.975302 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:02:49 crc kubenswrapper[4724]: E0226 14:02:49.976122 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:03:01 crc kubenswrapper[4724]: I0226 14:03:01.976945 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:03:01 crc kubenswrapper[4724]: E0226 14:03:01.978410 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.723224 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5d59"] Feb 26 14:03:06 crc kubenswrapper[4724]: E0226 14:03:06.724040 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e5e605-5a6d-4533-aadd-00fe984cd22b" containerName="oc" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.724054 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e5e605-5a6d-4533-aadd-00fe984cd22b" containerName="oc" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.724263 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e5e605-5a6d-4533-aadd-00fe984cd22b" containerName="oc" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.725549 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.759507 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5d59"] Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.784795 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6wz\" (UniqueName: \"kubernetes.io/projected/8083d09f-5dad-48df-a0d2-87115a9bf375-kube-api-access-dw6wz\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.784837 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-catalog-content\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.785083 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-utilities\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.887530 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-utilities\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.887671 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6wz\" (UniqueName: \"kubernetes.io/projected/8083d09f-5dad-48df-a0d2-87115a9bf375-kube-api-access-dw6wz\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.887701 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-catalog-content\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.888137 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-utilities\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.888223 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-catalog-content\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:06 crc kubenswrapper[4724]: I0226 14:03:06.918386 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dw6wz\" (UniqueName: \"kubernetes.io/projected/8083d09f-5dad-48df-a0d2-87115a9bf375-kube-api-access-dw6wz\") pod \"community-operators-k5d59\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:07 crc kubenswrapper[4724]: I0226 14:03:07.051687 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:08 crc kubenswrapper[4724]: I0226 14:03:08.062431 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5d59"] Feb 26 14:03:09 crc kubenswrapper[4724]: I0226 14:03:09.069207 4724 generic.go:334] "Generic (PLEG): container finished" podID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerID="03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078" exitCode=0 Feb 26 14:03:09 crc kubenswrapper[4724]: I0226 14:03:09.069289 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerDied","Data":"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078"} Feb 26 14:03:09 crc kubenswrapper[4724]: I0226 14:03:09.069526 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerStarted","Data":"b10581681509c02d8de469379700f3ec720a3d4e3a20c40b24b9883129ceb11c"} Feb 26 14:03:11 crc kubenswrapper[4724]: I0226 14:03:11.099585 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerStarted","Data":"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda"} Feb 26 14:03:13 crc kubenswrapper[4724]: I0226 14:03:13.127269 4724 generic.go:334] "Generic (PLEG): container finished" podID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerID="0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda" exitCode=0 Feb 26 14:03:13 crc kubenswrapper[4724]: I0226 14:03:13.127721 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerDied","Data":"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda"} Feb 26 14:03:14 crc kubenswrapper[4724]: I0226 14:03:14.141976 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerStarted","Data":"80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481"} Feb 26 14:03:14 crc kubenswrapper[4724]: I0226 14:03:14.173186 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5d59" podStartSLOduration=3.630261419 podStartE2EDuration="8.173144683s" podCreationTimestamp="2026-02-26 14:03:06 +0000 UTC" firstStartedPulling="2026-02-26 14:03:09.072031171 +0000 UTC m=+10655.727770296" lastFinishedPulling="2026-02-26 14:03:13.614914445 +0000 UTC m=+10660.270653560" observedRunningTime="2026-02-26 14:03:14.168118856 +0000 UTC m=+10660.823857981" watchObservedRunningTime="2026-02-26 14:03:14.173144683 +0000 UTC m=+10660.828883798" Feb 26 14:03:16 crc kubenswrapper[4724]: I0226 14:03:16.975554 4724 scope.go:117] "RemoveContainer" 
containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:03:16 crc kubenswrapper[4724]: E0226 14:03:16.976367 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:03:17 crc kubenswrapper[4724]: I0226 14:03:17.051881 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:17 crc kubenswrapper[4724]: I0226 14:03:17.051945 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:18 crc kubenswrapper[4724]: I0226 14:03:18.101864 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k5d59" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server" probeResult="failure" output=< Feb 26 14:03:18 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:03:18 crc kubenswrapper[4724]: > Feb 26 14:03:28 crc kubenswrapper[4724]: I0226 14:03:28.105373 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k5d59" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server" probeResult="failure" output=< Feb 26 14:03:28 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:03:28 crc kubenswrapper[4724]: > Feb 26 14:03:29 crc kubenswrapper[4724]: I0226 14:03:29.976399 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:03:29 crc kubenswrapper[4724]: E0226 14:03:29.976852 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:03:37 crc kubenswrapper[4724]: I0226 14:03:37.103394 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:37 crc kubenswrapper[4724]: I0226 14:03:37.152697 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:38 crc kubenswrapper[4724]: I0226 14:03:38.362546 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5d59"] Feb 26 14:03:38 crc kubenswrapper[4724]: I0226 14:03:38.388739 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5d59" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server" containerID="cri-o://80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481" gracePeriod=2 Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.022118 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.078831 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw6wz\" (UniqueName: \"kubernetes.io/projected/8083d09f-5dad-48df-a0d2-87115a9bf375-kube-api-access-dw6wz\") pod \"8083d09f-5dad-48df-a0d2-87115a9bf375\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.079058 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-utilities\") pod \"8083d09f-5dad-48df-a0d2-87115a9bf375\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.079223 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-catalog-content\") pod \"8083d09f-5dad-48df-a0d2-87115a9bf375\" (UID: \"8083d09f-5dad-48df-a0d2-87115a9bf375\") " Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.079803 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-utilities" (OuterVolumeSpecName: "utilities") pod "8083d09f-5dad-48df-a0d2-87115a9bf375" (UID: "8083d09f-5dad-48df-a0d2-87115a9bf375"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.092480 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8083d09f-5dad-48df-a0d2-87115a9bf375-kube-api-access-dw6wz" (OuterVolumeSpecName: "kube-api-access-dw6wz") pod "8083d09f-5dad-48df-a0d2-87115a9bf375" (UID: "8083d09f-5dad-48df-a0d2-87115a9bf375"). InnerVolumeSpecName "kube-api-access-dw6wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.150089 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8083d09f-5dad-48df-a0d2-87115a9bf375" (UID: "8083d09f-5dad-48df-a0d2-87115a9bf375"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.182677 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw6wz\" (UniqueName: \"kubernetes.io/projected/8083d09f-5dad-48df-a0d2-87115a9bf375-kube-api-access-dw6wz\") on node \"crc\" DevicePath \"\"" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.182720 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.182733 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8083d09f-5dad-48df-a0d2-87115a9bf375-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.398496 4724 generic.go:334] "Generic (PLEG): container finished" podID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerID="80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481" exitCode=0 Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.398543 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerDied","Data":"80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481"} Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.398555 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5d59" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.398572 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5d59" event={"ID":"8083d09f-5dad-48df-a0d2-87115a9bf375","Type":"ContainerDied","Data":"b10581681509c02d8de469379700f3ec720a3d4e3a20c40b24b9883129ceb11c"} Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.398591 4724 scope.go:117] "RemoveContainer" containerID="80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.423448 4724 scope.go:117] "RemoveContainer" containerID="0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.444482 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5d59"] Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.453407 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5d59"] Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.462001 4724 scope.go:117] "RemoveContainer" containerID="03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.505753 4724 scope.go:117] "RemoveContainer" containerID="80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481" Feb 26 14:03:39 crc kubenswrapper[4724]: E0226 14:03:39.510641 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481\": container with ID starting with 80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481 not found: ID does not exist" containerID="80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.510700 
4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481"} err="failed to get container status \"80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481\": rpc error: code = NotFound desc = could not find container \"80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481\": container with ID starting with 80e32b584e6adb44c76efcb8c043dd1b1c85ccd608ff93f53406be00a1437481 not found: ID does not exist" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.510815 4724 scope.go:117] "RemoveContainer" containerID="0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda" Feb 26 14:03:39 crc kubenswrapper[4724]: E0226 14:03:39.512320 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda\": container with ID starting with 0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda not found: ID does not exist" containerID="0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.512514 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda"} err="failed to get container status \"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda\": rpc error: code = NotFound desc = could not find container \"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda\": container with ID starting with 0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda not found: ID does not exist" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.512548 4724 scope.go:117] "RemoveContainer" containerID="03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078" Feb 26 14:03:39 crc kubenswrapper[4724]: E0226 14:03:39.512955 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078\": container with ID starting with 03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078 not found: ID does not exist" containerID="03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.513012 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078"} err="failed to get container status \"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078\": rpc error: code = NotFound desc = could not find container \"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078\": container with ID starting with 03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078 not found: ID does not exist" Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.991684 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" path="/var/lib/kubelet/pods/8083d09f-5dad-48df-a0d2-87115a9bf375/volumes" Feb 26 14:03:44 crc kubenswrapper[4724]: I0226 14:03:44.975960 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:03:44 crc kubenswrapper[4724]: E0226 14:03:44.976816 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:03:55 crc kubenswrapper[4724]: I0226 14:03:55.975651 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:03:55 crc kubenswrapper[4724]: E0226 14:03:55.976426 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.179475 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535244-5h5wd"] Feb 26 14:04:00 crc kubenswrapper[4724]: E0226 14:04:00.182200 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server" Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182234 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server" Feb 26 14:04:00 crc kubenswrapper[4724]: E0226 14:04:00.182313 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-utilities" Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182323 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-utilities" Feb 26 14:04:00 crc kubenswrapper[4724]: E0226 14:04:00.182344 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-content" Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182352 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-content" Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182647 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server" Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.184611 4724 util.go:30] "No sandbox for pod can be found. 
Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.510815 4724 scope.go:117] "RemoveContainer" containerID="0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda"
Feb 26 14:03:39 crc kubenswrapper[4724]: E0226 14:03:39.512320 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda\": container with ID starting with 0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda not found: ID does not exist" containerID="0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda"
Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.512514 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda"} err="failed to get container status \"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda\": rpc error: code = NotFound desc = could not find container \"0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda\": container with ID starting with 0a7320b25551c643a00f2c6c40e4049695b1664acabce41c36c24a7a058f3eda not found: ID does not exist"
Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.512548 4724 scope.go:117] "RemoveContainer" containerID="03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078"
Feb 26 14:03:39 crc kubenswrapper[4724]: E0226 14:03:39.512955 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078\": container with ID starting with 03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078 not found: ID does not exist" containerID="03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078"
Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.513012 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078"} err="failed to get container status \"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078\": rpc error: code = NotFound desc = could not find container \"03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078\": container with ID starting with 03558ee879fe0d3c18d9c03a5d094a9dec944ba9673e39f1dfcd7dc8c93cb078 not found: ID does not exist"
Feb 26 14:03:39 crc kubenswrapper[4724]: I0226 14:03:39.991684 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" path="/var/lib/kubelet/pods/8083d09f-5dad-48df-a0d2-87115a9bf375/volumes"
Feb 26 14:03:44 crc kubenswrapper[4724]: I0226 14:03:44.975960 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 14:03:44 crc kubenswrapper[4724]: E0226 14:03:44.976816 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:03:55 crc kubenswrapper[4724]: I0226 14:03:55.975651 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 14:03:55 crc kubenswrapper[4724]: E0226 14:03:55.976426 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.179475 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535244-5h5wd"]
Feb 26 14:04:00 crc kubenswrapper[4724]: E0226 14:04:00.182200 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182234 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server"
Feb 26 14:04:00 crc kubenswrapper[4724]: E0226 14:04:00.182313 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-utilities"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182323 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-utilities"
Feb 26 14:04:00 crc kubenswrapper[4724]: E0226 14:04:00.182344 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-content"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182352 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="extract-content"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.182647 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="8083d09f-5dad-48df-a0d2-87115a9bf375" containerName="registry-server"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.184611 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.189691 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.193616 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.215429 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535244-5h5wd"]
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.221485 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.298342 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqq7c\" (UniqueName: \"kubernetes.io/projected/6b9bb426-a627-4c99-bcdb-dec403e13758-kube-api-access-mqq7c\") pod \"auto-csr-approver-29535244-5h5wd\" (UID: \"6b9bb426-a627-4c99-bcdb-dec403e13758\") " pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.410558 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqq7c\" (UniqueName: \"kubernetes.io/projected/6b9bb426-a627-4c99-bcdb-dec403e13758-kube-api-access-mqq7c\") pod \"auto-csr-approver-29535244-5h5wd\" (UID: \"6b9bb426-a627-4c99-bcdb-dec403e13758\") " pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.438114 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqq7c\" (UniqueName: \"kubernetes.io/projected/6b9bb426-a627-4c99-bcdb-dec403e13758-kube-api-access-mqq7c\") pod \"auto-csr-approver-29535244-5h5wd\" (UID: \"6b9bb426-a627-4c99-bcdb-dec403e13758\") " pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:00 crc kubenswrapper[4724]: I0226 14:04:00.597781 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:01 crc kubenswrapper[4724]: I0226 14:04:01.189377 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535244-5h5wd"]
Feb 26 14:04:01 crc kubenswrapper[4724]: I0226 14:04:01.602488 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535244-5h5wd" event={"ID":"6b9bb426-a627-4c99-bcdb-dec403e13758","Type":"ContainerStarted","Data":"16dcb38df482218e7d2019c370ccaf64e9f21baa688f674476dfd5e70595b94a"}
Feb 26 14:04:03 crc kubenswrapper[4724]: I0226 14:04:03.626959 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535244-5h5wd" event={"ID":"6b9bb426-a627-4c99-bcdb-dec403e13758","Type":"ContainerStarted","Data":"199f0aa2fa873f15c6e7b08cf7d579b8c73a1e9f8ab497b7bfba70ba7ebcb1e1"}
Feb 26 14:04:03 crc kubenswrapper[4724]: I0226 14:04:03.648825 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535244-5h5wd" podStartSLOduration=2.357705624 podStartE2EDuration="3.648796202s" podCreationTimestamp="2026-02-26 14:04:00 +0000 UTC" firstStartedPulling="2026-02-26 14:04:01.203298503 +0000 UTC m=+10707.859037618" lastFinishedPulling="2026-02-26 14:04:02.494389081 +0000 UTC m=+10709.150128196" observedRunningTime="2026-02-26 14:04:03.647761967 +0000 UTC m=+10710.303501102" watchObservedRunningTime="2026-02-26 14:04:03.648796202 +0000 UTC m=+10710.304535317"
Feb 26 14:04:05 crc kubenswrapper[4724]: I0226 14:04:05.667275 4724 generic.go:334] "Generic (PLEG): container finished" podID="6b9bb426-a627-4c99-bcdb-dec403e13758" containerID="199f0aa2fa873f15c6e7b08cf7d579b8c73a1e9f8ab497b7bfba70ba7ebcb1e1" exitCode=0
Feb 26 14:04:05 crc kubenswrapper[4724]: I0226 14:04:05.667411 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535244-5h5wd" event={"ID":"6b9bb426-a627-4c99-bcdb-dec403e13758","Type":"ContainerDied","Data":"199f0aa2fa873f15c6e7b08cf7d579b8c73a1e9f8ab497b7bfba70ba7ebcb1e1"}
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.217311 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.347330 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqq7c\" (UniqueName: \"kubernetes.io/projected/6b9bb426-a627-4c99-bcdb-dec403e13758-kube-api-access-mqq7c\") pod \"6b9bb426-a627-4c99-bcdb-dec403e13758\" (UID: \"6b9bb426-a627-4c99-bcdb-dec403e13758\") "
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.366593 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9bb426-a627-4c99-bcdb-dec403e13758-kube-api-access-mqq7c" (OuterVolumeSpecName: "kube-api-access-mqq7c") pod "6b9bb426-a627-4c99-bcdb-dec403e13758" (UID: "6b9bb426-a627-4c99-bcdb-dec403e13758"). InnerVolumeSpecName "kube-api-access-mqq7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.451216 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqq7c\" (UniqueName: \"kubernetes.io/projected/6b9bb426-a627-4c99-bcdb-dec403e13758-kube-api-access-mqq7c\") on node \"crc\" DevicePath \"\""
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.691754 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535244-5h5wd" event={"ID":"6b9bb426-a627-4c99-bcdb-dec403e13758","Type":"ContainerDied","Data":"16dcb38df482218e7d2019c370ccaf64e9f21baa688f674476dfd5e70595b94a"}
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.691816 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16dcb38df482218e7d2019c370ccaf64e9f21baa688f674476dfd5e70595b94a"
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.691810 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535244-5h5wd"
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.819746 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535238-j5bxm"]
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.827880 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535238-j5bxm"]
Feb 26 14:04:07 crc kubenswrapper[4724]: I0226 14:04:07.988349 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b3d573-e3f5-4eac-91fc-e5436296bd24" path="/var/lib/kubelet/pods/b0b3d573-e3f5-4eac-91fc-e5436296bd24/volumes"
Feb 26 14:04:10 crc kubenswrapper[4724]: I0226 14:04:10.976151 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 14:04:10 crc kubenswrapper[4724]: E0226 14:04:10.976916 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:04:23 crc kubenswrapper[4724]: I0226 14:04:23.983758 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788"
Feb 26 14:04:24 crc kubenswrapper[4724]: I0226 14:04:24.843246 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"232809cf0ef7b44ca0c4267d955abe3d0e13f6297302072ab5c41227e168575b"}
Feb 26 14:04:30 crc kubenswrapper[4724]: I0226 14:04:30.886724 4724 scope.go:117] "RemoveContainer" containerID="bf18d5241f9a375ab46bf94f9f9f61f2ff2dde09b45e41a00b9b6127ae4f0cde"
Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.391990 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vrjs2"]
Feb 26 14:05:51 crc kubenswrapper[4724]: E0226 14:05:51.393053 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9bb426-a627-4c99-bcdb-dec403e13758" containerName="oc"
Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.393068 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9bb426-a627-4c99-bcdb-dec403e13758" containerName="oc"
podUID="6b9bb426-a627-4c99-bcdb-dec403e13758" containerName="oc" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.393284 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9bb426-a627-4c99-bcdb-dec403e13758" containerName="oc" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.397422 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.466086 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrjs2"] Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.493443 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-utilities\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.493892 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88tz\" (UniqueName: \"kubernetes.io/projected/5d39e830-8878-46d7-9bc8-c24c98ece848-kube-api-access-x88tz\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.494045 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-catalog-content\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.596167 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88tz\" (UniqueName: \"kubernetes.io/projected/5d39e830-8878-46d7-9bc8-c24c98ece848-kube-api-access-x88tz\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.596304 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-catalog-content\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.596395 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-utilities\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.597053 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-utilities\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.597124 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-catalog-content\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.625108 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88tz\" (UniqueName: \"kubernetes.io/projected/5d39e830-8878-46d7-9bc8-c24c98ece848-kube-api-access-x88tz\") pod \"redhat-operators-vrjs2\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:51 crc kubenswrapper[4724]: I0226 14:05:51.727743 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:05:52 crc kubenswrapper[4724]: I0226 14:05:52.646100 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrjs2"] Feb 26 14:05:52 crc kubenswrapper[4724]: I0226 14:05:52.783333 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerStarted","Data":"a38bcec5b80eb567de7aaffc4b45ac96633fff5035d0b30dc282578d9682d2fc"} Feb 26 14:05:53 crc kubenswrapper[4724]: I0226 14:05:53.794360 4724 generic.go:334] "Generic (PLEG): container finished" podID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerID="0671d49a4d809a435abc95f30003185c99584b6fc4ee9bf6e1efb58e6edeea62" exitCode=0 Feb 26 14:05:53 crc kubenswrapper[4724]: I0226 14:05:53.794414 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerDied","Data":"0671d49a4d809a435abc95f30003185c99584b6fc4ee9bf6e1efb58e6edeea62"} Feb 26 14:05:53 crc kubenswrapper[4724]: I0226 14:05:53.798523 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:05:56 crc kubenswrapper[4724]: I0226 14:05:56.840638 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerStarted","Data":"d61c0248d3e5594a92776ee811733f0807ac1db1ddaf0c9e1aa37457f03a829a"} Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.162238 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535246-2wvr6"] Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.163611 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535246-2wvr6" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.167672 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.167933 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.168060 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.180272 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535246-2wvr6"] Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.304734 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph2vj\" (UniqueName: \"kubernetes.io/projected/f586ddcd-6060-43aa-9ea4-f86d11774d95-kube-api-access-ph2vj\") pod \"auto-csr-approver-29535246-2wvr6\" (UID: \"f586ddcd-6060-43aa-9ea4-f86d11774d95\") " pod="openshift-infra/auto-csr-approver-29535246-2wvr6" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.407448 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph2vj\" (UniqueName: \"kubernetes.io/projected/f586ddcd-6060-43aa-9ea4-f86d11774d95-kube-api-access-ph2vj\") pod \"auto-csr-approver-29535246-2wvr6\" (UID: \"f586ddcd-6060-43aa-9ea4-f86d11774d95\") " pod="openshift-infra/auto-csr-approver-29535246-2wvr6" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.438588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph2vj\" (UniqueName: \"kubernetes.io/projected/f586ddcd-6060-43aa-9ea4-f86d11774d95-kube-api-access-ph2vj\") pod \"auto-csr-approver-29535246-2wvr6\" (UID: \"f586ddcd-6060-43aa-9ea4-f86d11774d95\") " pod="openshift-infra/auto-csr-approver-29535246-2wvr6" Feb 26 14:06:00 crc kubenswrapper[4724]: I0226 14:06:00.522517 4724 util.go:30] "No sandbox for pod can be found. 
Feb 26 14:06:01 crc kubenswrapper[4724]: I0226 14:06:01.200121 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535246-2wvr6"]
Feb 26 14:06:01 crc kubenswrapper[4724]: I0226 14:06:01.887020 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535246-2wvr6" event={"ID":"f586ddcd-6060-43aa-9ea4-f86d11774d95","Type":"ContainerStarted","Data":"d8266843aa6febe905cc2be9699c7af9f26daba0dc66ba79d62ef114d46fe008"}
Feb 26 14:06:03 crc kubenswrapper[4724]: I0226 14:06:03.910021 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535246-2wvr6" event={"ID":"f586ddcd-6060-43aa-9ea4-f86d11774d95","Type":"ContainerStarted","Data":"60de01210f181aeda40e38cdae1079d9f9ce54364a15a76515842d8a8c108279"}
Feb 26 14:06:03 crc kubenswrapper[4724]: I0226 14:06:03.945923 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535246-2wvr6" podStartSLOduration=2.765180437 podStartE2EDuration="3.945873308s" podCreationTimestamp="2026-02-26 14:06:00 +0000 UTC" firstStartedPulling="2026-02-26 14:06:01.269903886 +0000 UTC m=+10827.925643001" lastFinishedPulling="2026-02-26 14:06:02.450596757 +0000 UTC m=+10829.106335872" observedRunningTime="2026-02-26 14:06:03.938537277 +0000 UTC m=+10830.594276392" watchObservedRunningTime="2026-02-26 14:06:03.945873308 +0000 UTC m=+10830.601612423"
Feb 26 14:06:04 crc kubenswrapper[4724]: I0226 14:06:04.921418 4724 generic.go:334] "Generic (PLEG): container finished" podID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerID="d61c0248d3e5594a92776ee811733f0807ac1db1ddaf0c9e1aa37457f03a829a" exitCode=0
Feb 26 14:06:04 crc kubenswrapper[4724]: I0226 14:06:04.921486 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerDied","Data":"d61c0248d3e5594a92776ee811733f0807ac1db1ddaf0c9e1aa37457f03a829a"}
Feb 26 14:06:05 crc kubenswrapper[4724]: I0226 14:06:05.932031 4724 generic.go:334] "Generic (PLEG): container finished" podID="f586ddcd-6060-43aa-9ea4-f86d11774d95" containerID="60de01210f181aeda40e38cdae1079d9f9ce54364a15a76515842d8a8c108279" exitCode=0
Feb 26 14:06:05 crc kubenswrapper[4724]: I0226 14:06:05.932233 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535246-2wvr6" event={"ID":"f586ddcd-6060-43aa-9ea4-f86d11774d95","Type":"ContainerDied","Data":"60de01210f181aeda40e38cdae1079d9f9ce54364a15a76515842d8a8c108279"}
Feb 26 14:06:06 crc kubenswrapper[4724]: I0226 14:06:06.947511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerStarted","Data":"f2f31878845593dc171daf55c3bfffd25b95ba78a988b0dc1025524727a88a16"}
Feb 26 14:06:06 crc kubenswrapper[4724]: I0226 14:06:06.986291 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vrjs2" podStartSLOduration=4.410784741 podStartE2EDuration="15.986267034s" podCreationTimestamp="2026-02-26 14:05:51 +0000 UTC" firstStartedPulling="2026-02-26 14:05:53.796661153 +0000 UTC m=+10820.452400278" lastFinishedPulling="2026-02-26 14:06:05.372143456 +0000 UTC m=+10832.027882571" observedRunningTime="2026-02-26 14:06:06.983249919 +0000 UTC m=+10833.638989034" watchObservedRunningTime="2026-02-26 14:06:06.986267034 +0000 UTC m=+10833.642006149"
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.355407 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535246-2wvr6"
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.456276 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph2vj\" (UniqueName: \"kubernetes.io/projected/f586ddcd-6060-43aa-9ea4-f86d11774d95-kube-api-access-ph2vj\") pod \"f586ddcd-6060-43aa-9ea4-f86d11774d95\" (UID: \"f586ddcd-6060-43aa-9ea4-f86d11774d95\") "
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.468651 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f586ddcd-6060-43aa-9ea4-f86d11774d95-kube-api-access-ph2vj" (OuterVolumeSpecName: "kube-api-access-ph2vj") pod "f586ddcd-6060-43aa-9ea4-f86d11774d95" (UID: "f586ddcd-6060-43aa-9ea4-f86d11774d95"). InnerVolumeSpecName "kube-api-access-ph2vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.557921 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph2vj\" (UniqueName: \"kubernetes.io/projected/f586ddcd-6060-43aa-9ea4-f86d11774d95-kube-api-access-ph2vj\") on node \"crc\" DevicePath \"\""
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.959330 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535246-2wvr6" event={"ID":"f586ddcd-6060-43aa-9ea4-f86d11774d95","Type":"ContainerDied","Data":"d8266843aa6febe905cc2be9699c7af9f26daba0dc66ba79d62ef114d46fe008"}
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.959369 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8266843aa6febe905cc2be9699c7af9f26daba0dc66ba79d62ef114d46fe008"
Feb 26 14:06:07 crc kubenswrapper[4724]: I0226 14:06:07.959435 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535246-2wvr6"
Feb 26 14:06:08 crc kubenswrapper[4724]: I0226 14:06:08.439704 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535240-5rkzk"]
Feb 26 14:06:08 crc kubenswrapper[4724]: I0226 14:06:08.451525 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535240-5rkzk"]
Feb 26 14:06:10 crc kubenswrapper[4724]: I0226 14:06:10.008374 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1011342d-98f8-4495-9997-b52e55037233" path="/var/lib/kubelet/pods/1011342d-98f8-4495-9997-b52e55037233/volumes"
Feb 26 14:06:11 crc kubenswrapper[4724]: I0226 14:06:11.728673 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrjs2"
Feb 26 14:06:11 crc kubenswrapper[4724]: I0226 14:06:11.729695 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrjs2"
Feb 26 14:06:12 crc kubenswrapper[4724]: I0226 14:06:12.783363 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrjs2" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:06:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:06:12 crc kubenswrapper[4724]: >
Feb 26 14:06:23 crc kubenswrapper[4724]: I0226 14:06:23.171113 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrjs2" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:06:23 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:06:23 crc kubenswrapper[4724]: >
Feb 26 14:06:31 crc kubenswrapper[4724]: I0226 14:06:31.048215 4724 scope.go:117] "RemoveContainer" containerID="4879b777422fa8bcf74b6b70b95d558c49b52a868d7a143acbc412b0499b61eb"
Feb 26 14:06:32 crc kubenswrapper[4724]: I0226 14:06:32.786285 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrjs2" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:06:32 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:06:32 crc kubenswrapper[4724]: >
Feb 26 14:06:42 crc kubenswrapper[4724]: I0226 14:06:42.783976 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrjs2" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:06:42 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:06:42 crc kubenswrapper[4724]: >
Feb 26 14:06:46 crc kubenswrapper[4724]: I0226 14:06:46.906218 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:06:46 crc kubenswrapper[4724]: I0226 14:06:46.910471 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:06:52 crc kubenswrapper[4724]: I0226 14:06:52.792521 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrjs2" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" probeResult="failure" output=< Feb 26 14:06:52 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:06:52 crc kubenswrapper[4724]: > Feb 26 14:07:01 crc kubenswrapper[4724]: I0226 14:07:01.835029 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:07:01 crc kubenswrapper[4724]: I0226 14:07:01.928596 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:07:02 crc kubenswrapper[4724]: I0226 14:07:02.083816 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrjs2"] Feb 26 14:07:03 crc kubenswrapper[4724]: I0226 14:07:03.496636 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrjs2" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" containerID="cri-o://f2f31878845593dc171daf55c3bfffd25b95ba78a988b0dc1025524727a88a16" gracePeriod=2 Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.510909 4724 generic.go:334] "Generic (PLEG): container finished" podID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerID="f2f31878845593dc171daf55c3bfffd25b95ba78a988b0dc1025524727a88a16" exitCode=0 Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.510966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerDied","Data":"f2f31878845593dc171daf55c3bfffd25b95ba78a988b0dc1025524727a88a16"} Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.763925 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.928548 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-catalog-content\") pod \"5d39e830-8878-46d7-9bc8-c24c98ece848\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.929045 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x88tz\" (UniqueName: \"kubernetes.io/projected/5d39e830-8878-46d7-9bc8-c24c98ece848-kube-api-access-x88tz\") pod \"5d39e830-8878-46d7-9bc8-c24c98ece848\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.929074 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-utilities\") pod \"5d39e830-8878-46d7-9bc8-c24c98ece848\" (UID: \"5d39e830-8878-46d7-9bc8-c24c98ece848\") " Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.933946 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-utilities" (OuterVolumeSpecName: "utilities") pod "5d39e830-8878-46d7-9bc8-c24c98ece848" (UID: "5d39e830-8878-46d7-9bc8-c24c98ece848"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:07:04 crc kubenswrapper[4724]: I0226 14:07:04.963416 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d39e830-8878-46d7-9bc8-c24c98ece848-kube-api-access-x88tz" (OuterVolumeSpecName: "kube-api-access-x88tz") pod "5d39e830-8878-46d7-9bc8-c24c98ece848" (UID: "5d39e830-8878-46d7-9bc8-c24c98ece848"). InnerVolumeSpecName "kube-api-access-x88tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.033807 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x88tz\" (UniqueName: \"kubernetes.io/projected/5d39e830-8878-46d7-9bc8-c24c98ece848-kube-api-access-x88tz\") on node \"crc\" DevicePath \"\"" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.033840 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.114406 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d39e830-8878-46d7-9bc8-c24c98ece848" (UID: "5d39e830-8878-46d7-9bc8-c24c98ece848"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.136398 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d39e830-8878-46d7-9bc8-c24c98ece848-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.526144 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrjs2" event={"ID":"5d39e830-8878-46d7-9bc8-c24c98ece848","Type":"ContainerDied","Data":"a38bcec5b80eb567de7aaffc4b45ac96633fff5035d0b30dc282578d9682d2fc"} Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.526318 4724 scope.go:117] "RemoveContainer" containerID="f2f31878845593dc171daf55c3bfffd25b95ba78a988b0dc1025524727a88a16" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.526332 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrjs2" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.581357 4724 scope.go:117] "RemoveContainer" containerID="d61c0248d3e5594a92776ee811733f0807ac1db1ddaf0c9e1aa37457f03a829a" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.633593 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrjs2"] Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.635989 4724 scope.go:117] "RemoveContainer" containerID="0671d49a4d809a435abc95f30003185c99584b6fc4ee9bf6e1efb58e6edeea62" Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.645431 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vrjs2"] Feb 26 14:07:05 crc kubenswrapper[4724]: I0226 14:07:05.988900 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" path="/var/lib/kubelet/pods/5d39e830-8878-46d7-9bc8-c24c98ece848/volumes" Feb 26 14:07:16 crc kubenswrapper[4724]: I0226 14:07:16.906090 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:07:16 crc kubenswrapper[4724]: I0226 14:07:16.906852 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:07:46 crc kubenswrapper[4724]: I0226 14:07:46.906267 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:07:46 crc kubenswrapper[4724]: I0226 14:07:46.907233 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:07:46 crc kubenswrapper[4724]: I0226 14:07:46.907317 4724 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:07:46 crc kubenswrapper[4724]: I0226 14:07:46.908646 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"232809cf0ef7b44ca0c4267d955abe3d0e13f6297302072ab5c41227e168575b"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:07:46 crc kubenswrapper[4724]: I0226 14:07:46.908722 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://232809cf0ef7b44ca0c4267d955abe3d0e13f6297302072ab5c41227e168575b" gracePeriod=600 Feb 26 14:07:47 crc kubenswrapper[4724]: I0226 14:07:47.989314 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="232809cf0ef7b44ca0c4267d955abe3d0e13f6297302072ab5c41227e168575b" exitCode=0 Feb 26 14:07:47 crc kubenswrapper[4724]: I0226 14:07:47.989396 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"232809cf0ef7b44ca0c4267d955abe3d0e13f6297302072ab5c41227e168575b"} Feb 26 14:07:47 crc kubenswrapper[4724]: I0226 14:07:47.990258 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4"} Feb 26 14:07:47 crc kubenswrapper[4724]: I0226 14:07:47.990330 4724 scope.go:117] "RemoveContainer" containerID="4843f70b887a0142015f8f23ba9d78b0d44e8cc8ca47a37ed15afcba02ce8788" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.177396 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535248-rdw2m"] Feb 26 14:08:00 crc kubenswrapper[4724]: E0226 14:08:00.178412 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f586ddcd-6060-43aa-9ea4-f86d11774d95" containerName="oc" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.178427 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f586ddcd-6060-43aa-9ea4-f86d11774d95" containerName="oc" Feb 26 14:08:00 crc kubenswrapper[4724]: E0226 14:08:00.178454 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="extract-utilities" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.178460 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="extract-utilities" Feb 26 14:08:00 crc kubenswrapper[4724]: E0226 14:08:00.178474 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.178480 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" Feb 26 14:08:00 crc kubenswrapper[4724]: E0226 14:08:00.178496 4724 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="extract-content" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.178502 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="extract-content" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.178703 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f586ddcd-6060-43aa-9ea4-f86d11774d95" containerName="oc" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.178720 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d39e830-8878-46d7-9bc8-c24c98ece848" containerName="registry-server" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.179577 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.188884 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.189111 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.191137 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.205715 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535248-rdw2m"] Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.374439 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptbg\" (UniqueName: \"kubernetes.io/projected/e1c532c8-5cee-4ebc-8104-cac5d020b5b0-kube-api-access-fptbg\") pod \"auto-csr-approver-29535248-rdw2m\" (UID: \"e1c532c8-5cee-4ebc-8104-cac5d020b5b0\") " pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:00 crc kubenswrapper[4724]: I0226 14:08:00.476856 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fptbg\" (UniqueName: \"kubernetes.io/projected/e1c532c8-5cee-4ebc-8104-cac5d020b5b0-kube-api-access-fptbg\") pod \"auto-csr-approver-29535248-rdw2m\" (UID: \"e1c532c8-5cee-4ebc-8104-cac5d020b5b0\") " pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:01 crc kubenswrapper[4724]: I0226 14:08:01.004011 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fptbg\" (UniqueName: \"kubernetes.io/projected/e1c532c8-5cee-4ebc-8104-cac5d020b5b0-kube-api-access-fptbg\") pod \"auto-csr-approver-29535248-rdw2m\" (UID: \"e1c532c8-5cee-4ebc-8104-cac5d020b5b0\") " pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:01 crc kubenswrapper[4724]: I0226 14:08:01.113002 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:01 crc kubenswrapper[4724]: I0226 14:08:01.615896 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535248-rdw2m"] Feb 26 14:08:02 crc kubenswrapper[4724]: I0226 14:08:02.157894 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" event={"ID":"e1c532c8-5cee-4ebc-8104-cac5d020b5b0","Type":"ContainerStarted","Data":"8465720d006a28cd6e3d98222c53b0fc60ab089cf9ea943d67bc1296a1667769"} Feb 26 14:08:03 crc kubenswrapper[4724]: I0226 14:08:03.175526 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" event={"ID":"e1c532c8-5cee-4ebc-8104-cac5d020b5b0","Type":"ContainerStarted","Data":"b9c86b0704a7348dd8176f2c9afa6e192f27324a94fe31dbeca94ff632b96604"} Feb 26 14:08:03 crc kubenswrapper[4724]: I0226 14:08:03.194436 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" podStartSLOduration=2.197863398 podStartE2EDuration="3.19441568s" podCreationTimestamp="2026-02-26 14:08:00 +0000 UTC" firstStartedPulling="2026-02-26 14:08:01.621098199 +0000 UTC m=+10948.276837314" lastFinishedPulling="2026-02-26 14:08:02.617650481 +0000 UTC m=+10949.273389596" observedRunningTime="2026-02-26 14:08:03.187965121 +0000 UTC m=+10949.843704246" watchObservedRunningTime="2026-02-26 14:08:03.19441568 +0000 UTC m=+10949.850154795" Feb 26 14:08:04 crc kubenswrapper[4724]: I0226 14:08:04.186940 4724 generic.go:334] "Generic (PLEG): container finished" podID="e1c532c8-5cee-4ebc-8104-cac5d020b5b0" containerID="b9c86b0704a7348dd8176f2c9afa6e192f27324a94fe31dbeca94ff632b96604" exitCode=0 Feb 26 14:08:04 crc kubenswrapper[4724]: I0226 14:08:04.187037 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" event={"ID":"e1c532c8-5cee-4ebc-8104-cac5d020b5b0","Type":"ContainerDied","Data":"b9c86b0704a7348dd8176f2c9afa6e192f27324a94fe31dbeca94ff632b96604"} Feb 26 14:08:05 crc kubenswrapper[4724]: I0226 14:08:05.618854 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:05 crc kubenswrapper[4724]: I0226 14:08:05.793943 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fptbg\" (UniqueName: \"kubernetes.io/projected/e1c532c8-5cee-4ebc-8104-cac5d020b5b0-kube-api-access-fptbg\") pod \"e1c532c8-5cee-4ebc-8104-cac5d020b5b0\" (UID: \"e1c532c8-5cee-4ebc-8104-cac5d020b5b0\") " Feb 26 14:08:05 crc kubenswrapper[4724]: I0226 14:08:05.803995 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c532c8-5cee-4ebc-8104-cac5d020b5b0-kube-api-access-fptbg" (OuterVolumeSpecName: "kube-api-access-fptbg") pod "e1c532c8-5cee-4ebc-8104-cac5d020b5b0" (UID: "e1c532c8-5cee-4ebc-8104-cac5d020b5b0"). InnerVolumeSpecName "kube-api-access-fptbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:08:05 crc kubenswrapper[4724]: I0226 14:08:05.896153 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fptbg\" (UniqueName: \"kubernetes.io/projected/e1c532c8-5cee-4ebc-8104-cac5d020b5b0-kube-api-access-fptbg\") on node \"crc\" DevicePath \"\"" Feb 26 14:08:06 crc kubenswrapper[4724]: I0226 14:08:06.209268 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" event={"ID":"e1c532c8-5cee-4ebc-8104-cac5d020b5b0","Type":"ContainerDied","Data":"8465720d006a28cd6e3d98222c53b0fc60ab089cf9ea943d67bc1296a1667769"} Feb 26 14:08:06 crc kubenswrapper[4724]: I0226 14:08:06.209316 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8465720d006a28cd6e3d98222c53b0fc60ab089cf9ea943d67bc1296a1667769" Feb 26 14:08:06 crc kubenswrapper[4724]: I0226 14:08:06.209379 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535248-rdw2m" Feb 26 14:08:06 crc kubenswrapper[4724]: I0226 14:08:06.305892 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535242-9p4mf"] Feb 26 14:08:06 crc kubenswrapper[4724]: I0226 14:08:06.317383 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535242-9p4mf"] Feb 26 14:08:07 crc kubenswrapper[4724]: I0226 14:08:07.994883 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e5e605-5a6d-4533-aadd-00fe984cd22b" path="/var/lib/kubelet/pods/46e5e605-5a6d-4533-aadd-00fe984cd22b/volumes" Feb 26 14:08:31 crc kubenswrapper[4724]: I0226 14:08:31.194406 4724 scope.go:117] "RemoveContainer" containerID="71bcd1f4a1a8612b9a2508982ecbe19cf28911a847216dce424162565f131969" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.164510 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535250-ftghv"] Feb 26 14:10:00 crc kubenswrapper[4724]: E0226 14:10:00.166393 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c532c8-5cee-4ebc-8104-cac5d020b5b0" containerName="oc" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.166417 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c532c8-5cee-4ebc-8104-cac5d020b5b0" containerName="oc" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.166715 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c532c8-5cee-4ebc-8104-cac5d020b5b0" containerName="oc" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.168053 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.174961 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.175531 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.177140 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.243089 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535250-ftghv"] Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.276952 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8m5\" (UniqueName: \"kubernetes.io/projected/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677-kube-api-access-cf8m5\") pod \"auto-csr-approver-29535250-ftghv\" (UID: \"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677\") " pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.379303 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8m5\" (UniqueName: \"kubernetes.io/projected/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677-kube-api-access-cf8m5\") pod \"auto-csr-approver-29535250-ftghv\" (UID: \"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677\") " pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.400432 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8m5\" (UniqueName: \"kubernetes.io/projected/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677-kube-api-access-cf8m5\") pod \"auto-csr-approver-29535250-ftghv\" (UID: \"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677\") " pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:00 crc kubenswrapper[4724]: I0226 14:10:00.499511 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:01 crc kubenswrapper[4724]: I0226 14:10:01.032981 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535250-ftghv"] Feb 26 14:10:01 crc kubenswrapper[4724]: I0226 14:10:01.266244 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535250-ftghv" event={"ID":"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677","Type":"ContainerStarted","Data":"4b8103f890904d5f62f424d487939f46985e02dd62f51138ddd37892d709efc7"} Feb 26 14:10:04 crc kubenswrapper[4724]: I0226 14:10:04.296287 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535250-ftghv" event={"ID":"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677","Type":"ContainerStarted","Data":"5a548eab816a2ee242d737ae53b33d8218321c084d44002fa976b408a4a8e9a3"} Feb 26 14:10:04 crc kubenswrapper[4724]: I0226 14:10:04.318023 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535250-ftghv" podStartSLOduration=1.987148879 podStartE2EDuration="4.317986274s" podCreationTimestamp="2026-02-26 14:10:00 +0000 UTC" firstStartedPulling="2026-02-26 14:10:01.047079323 +0000 UTC m=+11067.702818438" lastFinishedPulling="2026-02-26 14:10:03.377916718 +0000 UTC m=+11070.033655833" observedRunningTime="2026-02-26 14:10:04.310694684 +0000 UTC m=+11070.966433809" watchObservedRunningTime="2026-02-26 14:10:04.317986274 +0000 UTC m=+11070.973725409" Feb 26 14:10:05 crc kubenswrapper[4724]: I0226 14:10:05.308581 4724 generic.go:334] "Generic (PLEG): container finished" podID="fead7f94-8dc5-4b2a-8f4d-bdf5bb409677" containerID="5a548eab816a2ee242d737ae53b33d8218321c084d44002fa976b408a4a8e9a3" exitCode=0 Feb 26 14:10:05 crc kubenswrapper[4724]: I0226 14:10:05.308709 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535250-ftghv" event={"ID":"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677","Type":"ContainerDied","Data":"5a548eab816a2ee242d737ae53b33d8218321c084d44002fa976b408a4a8e9a3"} Feb 26 14:10:06 crc kubenswrapper[4724]: I0226 14:10:06.742253 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:06 crc kubenswrapper[4724]: I0226 14:10:06.836311 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf8m5\" (UniqueName: \"kubernetes.io/projected/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677-kube-api-access-cf8m5\") pod \"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677\" (UID: \"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677\") " Feb 26 14:10:06 crc kubenswrapper[4724]: I0226 14:10:06.848112 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677-kube-api-access-cf8m5" (OuterVolumeSpecName: "kube-api-access-cf8m5") pod "fead7f94-8dc5-4b2a-8f4d-bdf5bb409677" (UID: "fead7f94-8dc5-4b2a-8f4d-bdf5bb409677"). InnerVolumeSpecName "kube-api-access-cf8m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:10:06 crc kubenswrapper[4724]: I0226 14:10:06.940523 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf8m5\" (UniqueName: \"kubernetes.io/projected/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677-kube-api-access-cf8m5\") on node \"crc\" DevicePath \"\"" Feb 26 14:10:07 crc kubenswrapper[4724]: I0226 14:10:07.133876 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535244-5h5wd"] Feb 26 14:10:07 crc kubenswrapper[4724]: I0226 14:10:07.150147 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535244-5h5wd"] Feb 26 14:10:07 crc kubenswrapper[4724]: I0226 14:10:07.332394 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535250-ftghv" event={"ID":"fead7f94-8dc5-4b2a-8f4d-bdf5bb409677","Type":"ContainerDied","Data":"4b8103f890904d5f62f424d487939f46985e02dd62f51138ddd37892d709efc7"} Feb 26 14:10:07 crc kubenswrapper[4724]: I0226 14:10:07.332478 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535250-ftghv" Feb 26 14:10:07 crc kubenswrapper[4724]: I0226 14:10:07.332512 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8103f890904d5f62f424d487939f46985e02dd62f51138ddd37892d709efc7" Feb 26 14:10:07 crc kubenswrapper[4724]: I0226 14:10:07.987361 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9bb426-a627-4c99-bcdb-dec403e13758" path="/var/lib/kubelet/pods/6b9bb426-a627-4c99-bcdb-dec403e13758/volumes" Feb 26 14:10:16 crc kubenswrapper[4724]: I0226 14:10:16.908160 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:10:16 crc kubenswrapper[4724]: I0226 14:10:16.909246 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:10:31 crc kubenswrapper[4724]: I0226 14:10:31.379761 4724 scope.go:117] "RemoveContainer" containerID="199f0aa2fa873f15c6e7b08cf7d579b8c73a1e9f8ab497b7bfba70ba7ebcb1e1" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.687900 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-plhtf"] Feb 26 14:10:44 crc kubenswrapper[4724]: E0226 14:10:44.691094 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fead7f94-8dc5-4b2a-8f4d-bdf5bb409677" containerName="oc" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.691239 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fead7f94-8dc5-4b2a-8f4d-bdf5bb409677" containerName="oc" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.691629 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fead7f94-8dc5-4b2a-8f4d-bdf5bb409677" containerName="oc" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.693730 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.706025 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-plhtf"] Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.826172 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-catalog-content\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.826420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-utilities\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.826629 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjsg\" (UniqueName: \"kubernetes.io/projected/3cfa1139-389b-4dbc-99c3-3d99b960a612-kube-api-access-jqjsg\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.928452 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-utilities\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.928624 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjsg\" (UniqueName: \"kubernetes.io/projected/3cfa1139-389b-4dbc-99c3-3d99b960a612-kube-api-access-jqjsg\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.928697 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-catalog-content\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.929254 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-utilities\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.929297 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-catalog-content\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:44 crc kubenswrapper[4724]: I0226 14:10:44.959135 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jqjsg\" (UniqueName: \"kubernetes.io/projected/3cfa1139-389b-4dbc-99c3-3d99b960a612-kube-api-access-jqjsg\") pod \"certified-operators-plhtf\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:45 crc kubenswrapper[4724]: I0226 14:10:45.016165 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:45 crc kubenswrapper[4724]: I0226 14:10:45.618368 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-plhtf"] Feb 26 14:10:45 crc kubenswrapper[4724]: I0226 14:10:45.768990 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerStarted","Data":"cdd867a2aafc50c05baac68546ab25fb55f2482cd28138393f27f8a7f1ea843c"} Feb 26 14:10:46 crc kubenswrapper[4724]: I0226 14:10:46.779422 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerID="b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11" exitCode=0 Feb 26 14:10:46 crc kubenswrapper[4724]: I0226 14:10:46.779511 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerDied","Data":"b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11"} Feb 26 14:10:46 crc kubenswrapper[4724]: I0226 14:10:46.907147 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:10:46 crc kubenswrapper[4724]: I0226 14:10:46.907255 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:10:47 crc kubenswrapper[4724]: I0226 14:10:47.802867 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerStarted","Data":"56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f"} Feb 26 14:10:50 crc kubenswrapper[4724]: I0226 14:10:50.833750 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerID="56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f" exitCode=0 Feb 26 14:10:50 crc kubenswrapper[4724]: I0226 14:10:50.833833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerDied","Data":"56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f"} Feb 26 14:10:52 crc kubenswrapper[4724]: I0226 14:10:52.858374 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerStarted","Data":"d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16"} Feb 26 
14:10:52 crc kubenswrapper[4724]: I0226 14:10:52.886691 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-plhtf" podStartSLOduration=3.955250024 podStartE2EDuration="8.886658389s" podCreationTimestamp="2026-02-26 14:10:44 +0000 UTC" firstStartedPulling="2026-02-26 14:10:46.78206999 +0000 UTC m=+11113.437809105" lastFinishedPulling="2026-02-26 14:10:51.713478355 +0000 UTC m=+11118.369217470" observedRunningTime="2026-02-26 14:10:52.878796965 +0000 UTC m=+11119.534536080" watchObservedRunningTime="2026-02-26 14:10:52.886658389 +0000 UTC m=+11119.542397504" Feb 26 14:10:55 crc kubenswrapper[4724]: I0226 14:10:55.017251 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:55 crc kubenswrapper[4724]: I0226 14:10:55.018847 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:10:56 crc kubenswrapper[4724]: I0226 14:10:56.068229 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-plhtf" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="registry-server" probeResult="failure" output=< Feb 26 14:10:56 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:10:56 crc kubenswrapper[4724]: > Feb 26 14:11:06 crc kubenswrapper[4724]: I0226 14:11:06.090062 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-plhtf" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="registry-server" probeResult="failure" output=< Feb 26 14:11:06 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:11:06 crc kubenswrapper[4724]: > Feb 26 14:11:15 crc kubenswrapper[4724]: I0226 14:11:15.073552 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:11:15 crc kubenswrapper[4724]: I0226 14:11:15.150096 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:11:15 crc kubenswrapper[4724]: I0226 14:11:15.888313 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-plhtf"] Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.110474 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-plhtf" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="registry-server" containerID="cri-o://d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16" gracePeriod=2 Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.722599 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.867830 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-catalog-content\") pod \"3cfa1139-389b-4dbc-99c3-3d99b960a612\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.868368 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqjsg\" (UniqueName: \"kubernetes.io/projected/3cfa1139-389b-4dbc-99c3-3d99b960a612-kube-api-access-jqjsg\") pod \"3cfa1139-389b-4dbc-99c3-3d99b960a612\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.868614 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-utilities\") pod \"3cfa1139-389b-4dbc-99c3-3d99b960a612\" (UID: \"3cfa1139-389b-4dbc-99c3-3d99b960a612\") " Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.869542 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-utilities" (OuterVolumeSpecName: "utilities") pod "3cfa1139-389b-4dbc-99c3-3d99b960a612" (UID: "3cfa1139-389b-4dbc-99c3-3d99b960a612"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.882376 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfa1139-389b-4dbc-99c3-3d99b960a612-kube-api-access-jqjsg" (OuterVolumeSpecName: "kube-api-access-jqjsg") pod "3cfa1139-389b-4dbc-99c3-3d99b960a612" (UID: "3cfa1139-389b-4dbc-99c3-3d99b960a612"). InnerVolumeSpecName "kube-api-access-jqjsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.906413 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.906470 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.906516 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.907423 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.907488 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" gracePeriod=600 Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.946408 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cfa1139-389b-4dbc-99c3-3d99b960a612" (UID: "3cfa1139-389b-4dbc-99c3-3d99b960a612"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.972328 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.972377 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfa1139-389b-4dbc-99c3-3d99b960a612-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:11:16 crc kubenswrapper[4724]: I0226 14:11:16.972396 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqjsg\" (UniqueName: \"kubernetes.io/projected/3cfa1139-389b-4dbc-99c3-3d99b960a612-kube-api-access-jqjsg\") on node \"crc\" DevicePath \"\"" Feb 26 14:11:17 crc kubenswrapper[4724]: E0226 14:11:17.041971 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.139746 4724 generic.go:334] "Generic (PLEG): container finished" podID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerID="d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16" exitCode=0 Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.139840 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerDied","Data":"d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16"} Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.139882 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plhtf" event={"ID":"3cfa1139-389b-4dbc-99c3-3d99b960a612","Type":"ContainerDied","Data":"cdd867a2aafc50c05baac68546ab25fb55f2482cd28138393f27f8a7f1ea843c"} Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.139916 4724 scope.go:117] "RemoveContainer" containerID="d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.140134 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-plhtf" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.147628 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4"} Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.147654 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" exitCode=0 Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.149197 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:11:17 crc kubenswrapper[4724]: E0226 14:11:17.149508 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.189394 4724 scope.go:117] "RemoveContainer" containerID="56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.204406 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-plhtf"] Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.211479 4724 scope.go:117] "RemoveContainer" containerID="b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.215160 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-plhtf"] Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.339327 4724 scope.go:117] "RemoveContainer" containerID="d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16" Feb 26 14:11:17 crc kubenswrapper[4724]: E0226 14:11:17.345896 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16\": container with ID starting with d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16 not found: ID does not exist" containerID="d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.347450 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16"} err="failed to get container status \"d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16\": rpc error: code = NotFound desc = could not find container \"d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16\": container with ID starting with d6a14038c2eb68f905dbd3126fc24d73322746b37b6c3e37605a7ff55f381f16 not found: ID does not exist" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.347511 4724 scope.go:117] "RemoveContainer" containerID="56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f" Feb 26 14:11:17 crc kubenswrapper[4724]: E0226 14:11:17.348156 4724 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f\": container with ID starting with 56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f not found: ID does not exist" containerID="56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.348243 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f"} err="failed to get container status \"56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f\": rpc error: code = NotFound desc = could not find container \"56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f\": container with ID starting with 56b8454c15587229798e6b5d8b1c04c87bf619af5e074ed2e117dc73f1c9b24f not found: ID does not exist" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.348266 4724 scope.go:117] "RemoveContainer" containerID="b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11" Feb 26 14:11:17 crc kubenswrapper[4724]: E0226 14:11:17.348808 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11\": container with ID starting with b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11 not found: ID does not exist" containerID="b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.348864 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11"} err="failed to get container status \"b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11\": rpc error: code = NotFound desc = could not find container \"b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11\": container with ID starting with b798c6e7a46fba5352090e7b993a38f0e014f234cb884bba536ebf9bc42bce11 not found: ID does not exist" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.348903 4724 scope.go:117] "RemoveContainer" containerID="232809cf0ef7b44ca0c4267d955abe3d0e13f6297302072ab5c41227e168575b" Feb 26 14:11:17 crc kubenswrapper[4724]: I0226 14:11:17.991325 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" path="/var/lib/kubelet/pods/3cfa1139-389b-4dbc-99c3-3d99b960a612/volumes" Feb 26 14:11:28 crc kubenswrapper[4724]: I0226 14:11:28.976508 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:11:28 crc kubenswrapper[4724]: E0226 14:11:28.977448 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:11:39 crc kubenswrapper[4724]: I0226 14:11:39.977383 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:11:39 crc kubenswrapper[4724]: E0226 
14:11:39.979278 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:11:50 crc kubenswrapper[4724]: I0226 14:11:50.975454 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:11:50 crc kubenswrapper[4724]: E0226 14:11:50.976283 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.160410 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535252-9dvkv"] Feb 26 14:12:00 crc kubenswrapper[4724]: E0226 14:12:00.161425 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="extract-content" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.161443 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="extract-content" Feb 26 14:12:00 crc kubenswrapper[4724]: E0226 14:12:00.161469 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="extract-utilities" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.161478 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="extract-utilities" Feb 26 14:12:00 crc kubenswrapper[4724]: E0226 14:12:00.161506 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="registry-server" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.161513 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="registry-server" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.161737 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfa1139-389b-4dbc-99c3-3d99b960a612" containerName="registry-server" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.162549 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.165032 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.168003 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.171648 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.195602 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535252-9dvkv"] Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.243148 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvhl\" (UniqueName: \"kubernetes.io/projected/cee8124b-347c-4fb2-87c3-4c79c7c86e9d-kube-api-access-qcvhl\") pod \"auto-csr-approver-29535252-9dvkv\" (UID: \"cee8124b-347c-4fb2-87c3-4c79c7c86e9d\") " pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.345458 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcvhl\" (UniqueName: \"kubernetes.io/projected/cee8124b-347c-4fb2-87c3-4c79c7c86e9d-kube-api-access-qcvhl\") pod \"auto-csr-approver-29535252-9dvkv\" (UID: \"cee8124b-347c-4fb2-87c3-4c79c7c86e9d\") " pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.369808 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcvhl\" (UniqueName: \"kubernetes.io/projected/cee8124b-347c-4fb2-87c3-4c79c7c86e9d-kube-api-access-qcvhl\") pod \"auto-csr-approver-29535252-9dvkv\" (UID: \"cee8124b-347c-4fb2-87c3-4c79c7c86e9d\") " pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:00 crc kubenswrapper[4724]: I0226 14:12:00.487495 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:01 crc kubenswrapper[4724]: I0226 14:12:01.135995 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535252-9dvkv"] Feb 26 14:12:01 crc kubenswrapper[4724]: I0226 14:12:01.150384 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:12:01 crc kubenswrapper[4724]: I0226 14:12:01.553343 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" event={"ID":"cee8124b-347c-4fb2-87c3-4c79c7c86e9d","Type":"ContainerStarted","Data":"20c0ed68e6ba69f0a27a6f0e6df265ffc18121276853bf1948546eef324cf882"} Feb 26 14:12:02 crc kubenswrapper[4724]: I0226 14:12:02.975802 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:12:02 crc kubenswrapper[4724]: E0226 14:12:02.976673 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:12:03 crc kubenswrapper[4724]: I0226 14:12:03.575868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" event={"ID":"cee8124b-347c-4fb2-87c3-4c79c7c86e9d","Type":"ContainerStarted","Data":"fd92026b84a583aa8b6673880bbc8bc31591598a8cdcf9000fa7565d4e709503"} Feb 26 14:12:03 crc kubenswrapper[4724]: I0226 14:12:03.597283 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" podStartSLOduration=2.027964162 podStartE2EDuration="3.597246003s" podCreationTimestamp="2026-02-26 14:12:00 +0000 UTC" firstStartedPulling="2026-02-26 14:12:01.148068563 +0000 UTC m=+11187.803807678" lastFinishedPulling="2026-02-26 14:12:02.717350404 +0000 UTC m=+11189.373089519" observedRunningTime="2026-02-26 14:12:03.590755863 +0000 UTC m=+11190.246495008" watchObservedRunningTime="2026-02-26 14:12:03.597246003 +0000 UTC m=+11190.252985118" Feb 26 14:12:04 crc kubenswrapper[4724]: I0226 14:12:04.587636 4724 generic.go:334] "Generic (PLEG): container finished" podID="cee8124b-347c-4fb2-87c3-4c79c7c86e9d" containerID="fd92026b84a583aa8b6673880bbc8bc31591598a8cdcf9000fa7565d4e709503" exitCode=0 Feb 26 14:12:04 crc kubenswrapper[4724]: I0226 14:12:04.588065 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" event={"ID":"cee8124b-347c-4fb2-87c3-4c79c7c86e9d","Type":"ContainerDied","Data":"fd92026b84a583aa8b6673880bbc8bc31591598a8cdcf9000fa7565d4e709503"} Feb 26 14:12:05 crc kubenswrapper[4724]: I0226 14:12:05.993470 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.073713 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcvhl\" (UniqueName: \"kubernetes.io/projected/cee8124b-347c-4fb2-87c3-4c79c7c86e9d-kube-api-access-qcvhl\") pod \"cee8124b-347c-4fb2-87c3-4c79c7c86e9d\" (UID: \"cee8124b-347c-4fb2-87c3-4c79c7c86e9d\") " Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.079315 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee8124b-347c-4fb2-87c3-4c79c7c86e9d-kube-api-access-qcvhl" (OuterVolumeSpecName: "kube-api-access-qcvhl") pod "cee8124b-347c-4fb2-87c3-4c79c7c86e9d" (UID: "cee8124b-347c-4fb2-87c3-4c79c7c86e9d"). InnerVolumeSpecName "kube-api-access-qcvhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.176098 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcvhl\" (UniqueName: \"kubernetes.io/projected/cee8124b-347c-4fb2-87c3-4c79c7c86e9d-kube-api-access-qcvhl\") on node \"crc\" DevicePath \"\"" Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.609880 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" event={"ID":"cee8124b-347c-4fb2-87c3-4c79c7c86e9d","Type":"ContainerDied","Data":"20c0ed68e6ba69f0a27a6f0e6df265ffc18121276853bf1948546eef324cf882"} Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.609956 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20c0ed68e6ba69f0a27a6f0e6df265ffc18121276853bf1948546eef324cf882" Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.610055 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535252-9dvkv" Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.686462 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535246-2wvr6"] Feb 26 14:12:06 crc kubenswrapper[4724]: I0226 14:12:06.696054 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535246-2wvr6"] Feb 26 14:12:07 crc kubenswrapper[4724]: I0226 14:12:07.985681 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f586ddcd-6060-43aa-9ea4-f86d11774d95" path="/var/lib/kubelet/pods/f586ddcd-6060-43aa-9ea4-f86d11774d95/volumes" Feb 26 14:12:13 crc kubenswrapper[4724]: I0226 14:12:13.983561 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:12:13 crc kubenswrapper[4724]: E0226 14:12:13.989275 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:12:28 crc kubenswrapper[4724]: I0226 14:12:28.976130 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:12:28 crc kubenswrapper[4724]: E0226 14:12:28.976992 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:12:31 crc kubenswrapper[4724]: I0226 14:12:31.535541 4724 scope.go:117] "RemoveContainer" containerID="60de01210f181aeda40e38cdae1079d9f9ce54364a15a76515842d8a8c108279" Feb 26 14:12:40 crc kubenswrapper[4724]: I0226 14:12:40.976058 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:12:40 crc kubenswrapper[4724]: E0226 14:12:40.976720 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:12:54 crc kubenswrapper[4724]: I0226 14:12:54.975524 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:12:54 crc kubenswrapper[4724]: E0226 14:12:54.976485 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 
14:13:08 crc kubenswrapper[4724]: I0226 14:13:08.977474 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:13:08 crc kubenswrapper[4724]: E0226 14:13:08.979552 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:13:22 crc kubenswrapper[4724]: I0226 14:13:22.975499 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:13:22 crc kubenswrapper[4724]: E0226 14:13:22.976432 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:13:34 crc kubenswrapper[4724]: I0226 14:13:34.975161 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:13:34 crc kubenswrapper[4724]: E0226 14:13:34.975989 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:13:47 crc kubenswrapper[4724]: I0226 14:13:47.252994 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:13:47 crc kubenswrapper[4724]: E0226 14:13:47.268614 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.151763 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535254-458tm"] Feb 26 14:14:00 crc kubenswrapper[4724]: E0226 14:14:00.152810 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee8124b-347c-4fb2-87c3-4c79c7c86e9d" containerName="oc" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.152832 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee8124b-347c-4fb2-87c3-4c79c7c86e9d" containerName="oc" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.153091 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee8124b-347c-4fb2-87c3-4c79c7c86e9d" containerName="oc" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.153978 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.157503 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.157777 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.158017 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.167848 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535254-458tm"] Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.297678 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86rq\" (UniqueName: \"kubernetes.io/projected/d4bbe523-5e73-4a67-898f-e22b51bcbb10-kube-api-access-j86rq\") pod \"auto-csr-approver-29535254-458tm\" (UID: \"d4bbe523-5e73-4a67-898f-e22b51bcbb10\") " pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.399512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86rq\" (UniqueName: \"kubernetes.io/projected/d4bbe523-5e73-4a67-898f-e22b51bcbb10-kube-api-access-j86rq\") pod \"auto-csr-approver-29535254-458tm\" (UID: \"d4bbe523-5e73-4a67-898f-e22b51bcbb10\") " pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.423031 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86rq\" (UniqueName: \"kubernetes.io/projected/d4bbe523-5e73-4a67-898f-e22b51bcbb10-kube-api-access-j86rq\") pod \"auto-csr-approver-29535254-458tm\" (UID: \"d4bbe523-5e73-4a67-898f-e22b51bcbb10\") " pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:00 crc kubenswrapper[4724]: I0226 14:14:00.494351 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:01 crc kubenswrapper[4724]: I0226 14:14:00.966338 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535254-458tm"] Feb 26 14:14:01 crc kubenswrapper[4724]: W0226 14:14:00.970233 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4bbe523_5e73_4a67_898f_e22b51bcbb10.slice/crio-7512533269bab352387ee844864ea7544aa0da9316c92f9f49f17a07e8875231 WatchSource:0}: Error finding container 7512533269bab352387ee844864ea7544aa0da9316c92f9f49f17a07e8875231: Status 404 returned error can't find the container with id 7512533269bab352387ee844864ea7544aa0da9316c92f9f49f17a07e8875231 Feb 26 14:14:01 crc kubenswrapper[4724]: I0226 14:14:00.977143 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:14:01 crc kubenswrapper[4724]: E0226 14:14:00.977435 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:14:01 crc kubenswrapper[4724]: I0226 14:14:01.743607 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535254-458tm" event={"ID":"d4bbe523-5e73-4a67-898f-e22b51bcbb10","Type":"ContainerStarted","Data":"7512533269bab352387ee844864ea7544aa0da9316c92f9f49f17a07e8875231"} Feb 26 14:14:02 crc kubenswrapper[4724]: I0226 14:14:02.754528 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535254-458tm" event={"ID":"d4bbe523-5e73-4a67-898f-e22b51bcbb10","Type":"ContainerStarted","Data":"c4bdc974ce1093c89a5bebede21f962c70d18f1b396df837b590cc6e98050944"} Feb 26 14:14:03 crc kubenswrapper[4724]: I0226 14:14:03.765069 4724 generic.go:334] "Generic (PLEG): container finished" podID="d4bbe523-5e73-4a67-898f-e22b51bcbb10" containerID="c4bdc974ce1093c89a5bebede21f962c70d18f1b396df837b590cc6e98050944" exitCode=0 Feb 26 14:14:03 crc kubenswrapper[4724]: I0226 14:14:03.765138 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535254-458tm" event={"ID":"d4bbe523-5e73-4a67-898f-e22b51bcbb10","Type":"ContainerDied","Data":"c4bdc974ce1093c89a5bebede21f962c70d18f1b396df837b590cc6e98050944"} Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.217932 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.308917 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86rq\" (UniqueName: \"kubernetes.io/projected/d4bbe523-5e73-4a67-898f-e22b51bcbb10-kube-api-access-j86rq\") pod \"d4bbe523-5e73-4a67-898f-e22b51bcbb10\" (UID: \"d4bbe523-5e73-4a67-898f-e22b51bcbb10\") " Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.319945 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bbe523-5e73-4a67-898f-e22b51bcbb10-kube-api-access-j86rq" (OuterVolumeSpecName: "kube-api-access-j86rq") pod "d4bbe523-5e73-4a67-898f-e22b51bcbb10" (UID: "d4bbe523-5e73-4a67-898f-e22b51bcbb10"). InnerVolumeSpecName "kube-api-access-j86rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.413585 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j86rq\" (UniqueName: \"kubernetes.io/projected/d4bbe523-5e73-4a67-898f-e22b51bcbb10-kube-api-access-j86rq\") on node \"crc\" DevicePath \"\"" Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.795146 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535254-458tm" event={"ID":"d4bbe523-5e73-4a67-898f-e22b51bcbb10","Type":"ContainerDied","Data":"7512533269bab352387ee844864ea7544aa0da9316c92f9f49f17a07e8875231"} Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.795512 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7512533269bab352387ee844864ea7544aa0da9316c92f9f49f17a07e8875231" Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.795204 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535254-458tm" Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.862759 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535248-rdw2m"] Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.878499 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535248-rdw2m"] Feb 26 14:14:05 crc kubenswrapper[4724]: I0226 14:14:05.989448 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c532c8-5cee-4ebc-8104-cac5d020b5b0" path="/var/lib/kubelet/pods/e1c532c8-5cee-4ebc-8104-cac5d020b5b0/volumes" Feb 26 14:14:12 crc kubenswrapper[4724]: I0226 14:14:12.975794 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:14:12 crc kubenswrapper[4724]: E0226 14:14:12.976552 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:14:26 crc kubenswrapper[4724]: I0226 14:14:26.975273 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:14:26 crc kubenswrapper[4724]: E0226 14:14:26.976054 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:14:31 crc kubenswrapper[4724]: I0226 14:14:31.642536 4724 scope.go:117] "RemoveContainer" containerID="b9c86b0704a7348dd8176f2c9afa6e192f27324a94fe31dbeca94ff632b96604" Feb 26 14:14:41 crc kubenswrapper[4724]: I0226 14:14:41.981963 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:14:41 crc kubenswrapper[4724]: E0226 14:14:41.982822 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:46.553476 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4j9ps"] Feb 26 14:14:47 crc kubenswrapper[4724]: E0226 14:14:46.554718 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bbe523-5e73-4a67-898f-e22b51bcbb10" containerName="oc" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:46.554736 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bbe523-5e73-4a67-898f-e22b51bcbb10" containerName="oc" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:46.555026 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d4bbe523-5e73-4a67-898f-e22b51bcbb10" containerName="oc" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:46.557022 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j9ps"] Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:46.557156 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.112569 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxnk2\" (UniqueName: \"kubernetes.io/projected/d687f75c-1ca6-4fcb-879c-6cf921851aff-kube-api-access-rxnk2\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.112618 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-utilities\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.112749 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-catalog-content\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.215072 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxnk2\" (UniqueName: \"kubernetes.io/projected/d687f75c-1ca6-4fcb-879c-6cf921851aff-kube-api-access-rxnk2\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.215135 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-utilities\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.215257 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-catalog-content\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.216896 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-utilities\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.217221 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-catalog-content\") pod \"community-operators-4j9ps\" (UID: 
\"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.274548 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxnk2\" (UniqueName: \"kubernetes.io/projected/d687f75c-1ca6-4fcb-879c-6cf921851aff-kube-api-access-rxnk2\") pod \"community-operators-4j9ps\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:47 crc kubenswrapper[4724]: I0226 14:14:47.480258 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:48 crc kubenswrapper[4724]: I0226 14:14:48.228736 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j9ps"] Feb 26 14:14:48 crc kubenswrapper[4724]: I0226 14:14:48.292593 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerStarted","Data":"950cb00ff35b1721fedf01343fa352d0d1ff8dacdbab3d619d498ffa28b8abd4"} Feb 26 14:14:49 crc kubenswrapper[4724]: I0226 14:14:49.306857 4724 generic.go:334] "Generic (PLEG): container finished" podID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerID="12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867" exitCode=0 Feb 26 14:14:49 crc kubenswrapper[4724]: I0226 14:14:49.307045 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerDied","Data":"12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867"} Feb 26 14:14:51 crc kubenswrapper[4724]: I0226 14:14:51.328086 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerStarted","Data":"d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7"} Feb 26 14:14:52 crc kubenswrapper[4724]: I0226 14:14:52.341438 4724 generic.go:334] "Generic (PLEG): container finished" podID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerID="d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7" exitCode=0 Feb 26 14:14:52 crc kubenswrapper[4724]: I0226 14:14:52.341536 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerDied","Data":"d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7"} Feb 26 14:14:53 crc kubenswrapper[4724]: I0226 14:14:53.364493 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerStarted","Data":"3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc"} Feb 26 14:14:53 crc kubenswrapper[4724]: I0226 14:14:53.388632 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4j9ps" podStartSLOduration=3.678073595 podStartE2EDuration="7.388603567s" podCreationTimestamp="2026-02-26 14:14:46 +0000 UTC" firstStartedPulling="2026-02-26 14:14:49.309389216 +0000 UTC m=+11355.965128351" lastFinishedPulling="2026-02-26 14:14:53.019919208 +0000 UTC m=+11359.675658323" observedRunningTime="2026-02-26 14:14:53.383189033 +0000 UTC m=+11360.038928158" 
watchObservedRunningTime="2026-02-26 14:14:53.388603567 +0000 UTC m=+11360.044342672" Feb 26 14:14:55 crc kubenswrapper[4724]: I0226 14:14:55.975734 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:14:55 crc kubenswrapper[4724]: E0226 14:14:55.976894 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:14:57 crc kubenswrapper[4724]: I0226 14:14:57.481235 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:57 crc kubenswrapper[4724]: I0226 14:14:57.481649 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:14:58 crc kubenswrapper[4724]: I0226 14:14:58.542212 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4j9ps" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="registry-server" probeResult="failure" output=< Feb 26 14:14:58 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:14:58 crc kubenswrapper[4724]: > Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.187166 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br"] Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.189697 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.197089 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.206906 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.210451 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br"] Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.320565 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxjw\" (UniqueName: \"kubernetes.io/projected/e2d52259-8faf-4e53-9c4d-6210079417f4-kube-api-access-mwxjw\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.320654 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d52259-8faf-4e53-9c4d-6210079417f4-secret-volume\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.320831 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d52259-8faf-4e53-9c4d-6210079417f4-config-volume\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.422770 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxjw\" (UniqueName: \"kubernetes.io/projected/e2d52259-8faf-4e53-9c4d-6210079417f4-kube-api-access-mwxjw\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.422852 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d52259-8faf-4e53-9c4d-6210079417f4-secret-volume\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.422949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d52259-8faf-4e53-9c4d-6210079417f4-config-volume\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.425836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d52259-8faf-4e53-9c4d-6210079417f4-config-volume\") pod 
\"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.430561 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d52259-8faf-4e53-9c4d-6210079417f4-secret-volume\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.450921 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxjw\" (UniqueName: \"kubernetes.io/projected/e2d52259-8faf-4e53-9c4d-6210079417f4-kube-api-access-mwxjw\") pod \"collect-profiles-29535255-7p5br\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:00 crc kubenswrapper[4724]: I0226 14:15:00.523170 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:01 crc kubenswrapper[4724]: I0226 14:15:01.231328 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br"] Feb 26 14:15:01 crc kubenswrapper[4724]: I0226 14:15:01.448070 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" event={"ID":"e2d52259-8faf-4e53-9c4d-6210079417f4","Type":"ContainerStarted","Data":"baeb3d919d4c39b74893e7b72daa14d84ef1c432d323fd8c07ed39527f5dfca6"} Feb 26 14:15:01 crc kubenswrapper[4724]: I0226 14:15:01.448122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" event={"ID":"e2d52259-8faf-4e53-9c4d-6210079417f4","Type":"ContainerStarted","Data":"0f2fa628ff7a70c9877de178caa0411310dc0f998ffedb3193be56199ac63485"} Feb 26 14:15:01 crc kubenswrapper[4724]: I0226 14:15:01.485658 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" podStartSLOduration=1.485639352 podStartE2EDuration="1.485639352s" podCreationTimestamp="2026-02-26 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:15:01.471684777 +0000 UTC m=+11368.127423902" watchObservedRunningTime="2026-02-26 14:15:01.485639352 +0000 UTC m=+11368.141378467" Feb 26 14:15:02 crc kubenswrapper[4724]: I0226 14:15:02.458799 4724 generic.go:334] "Generic (PLEG): container finished" podID="e2d52259-8faf-4e53-9c4d-6210079417f4" containerID="baeb3d919d4c39b74893e7b72daa14d84ef1c432d323fd8c07ed39527f5dfca6" exitCode=0 Feb 26 14:15:02 crc kubenswrapper[4724]: I0226 14:15:02.458936 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" event={"ID":"e2d52259-8faf-4e53-9c4d-6210079417f4","Type":"ContainerDied","Data":"baeb3d919d4c39b74893e7b72daa14d84ef1c432d323fd8c07ed39527f5dfca6"} Feb 26 14:15:03 crc kubenswrapper[4724]: I0226 14:15:03.988951 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.096937 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d52259-8faf-4e53-9c4d-6210079417f4-config-volume\") pod \"e2d52259-8faf-4e53-9c4d-6210079417f4\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.097026 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d52259-8faf-4e53-9c4d-6210079417f4-secret-volume\") pod \"e2d52259-8faf-4e53-9c4d-6210079417f4\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.097100 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxjw\" (UniqueName: \"kubernetes.io/projected/e2d52259-8faf-4e53-9c4d-6210079417f4-kube-api-access-mwxjw\") pod \"e2d52259-8faf-4e53-9c4d-6210079417f4\" (UID: \"e2d52259-8faf-4e53-9c4d-6210079417f4\") " Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.098920 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d52259-8faf-4e53-9c4d-6210079417f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2d52259-8faf-4e53-9c4d-6210079417f4" (UID: "e2d52259-8faf-4e53-9c4d-6210079417f4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.116399 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d52259-8faf-4e53-9c4d-6210079417f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2d52259-8faf-4e53-9c4d-6210079417f4" (UID: "e2d52259-8faf-4e53-9c4d-6210079417f4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.116477 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d52259-8faf-4e53-9c4d-6210079417f4-kube-api-access-mwxjw" (OuterVolumeSpecName: "kube-api-access-mwxjw") pod "e2d52259-8faf-4e53-9c4d-6210079417f4" (UID: "e2d52259-8faf-4e53-9c4d-6210079417f4"). InnerVolumeSpecName "kube-api-access-mwxjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.199952 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d52259-8faf-4e53-9c4d-6210079417f4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.199985 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d52259-8faf-4e53-9c4d-6210079417f4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.199996 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxjw\" (UniqueName: \"kubernetes.io/projected/e2d52259-8faf-4e53-9c4d-6210079417f4-kube-api-access-mwxjw\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.350697 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"] Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.363170 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535210-sjlh6"] Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.484118 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" event={"ID":"e2d52259-8faf-4e53-9c4d-6210079417f4","Type":"ContainerDied","Data":"0f2fa628ff7a70c9877de178caa0411310dc0f998ffedb3193be56199ac63485"} Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.484156 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2fa628ff7a70c9877de178caa0411310dc0f998ffedb3193be56199ac63485" Feb 26 14:15:04 crc kubenswrapper[4724]: I0226 14:15:04.484223 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br" Feb 26 14:15:05 crc kubenswrapper[4724]: I0226 14:15:05.988761 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d732943-e434-4bb5-b301-74a6f7f2ce09" path="/var/lib/kubelet/pods/3d732943-e434-4bb5-b301-74a6f7f2ce09/volumes" Feb 26 14:15:08 crc kubenswrapper[4724]: I0226 14:15:08.525704 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4j9ps" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="registry-server" probeResult="failure" output=< Feb 26 14:15:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:15:08 crc kubenswrapper[4724]: > Feb 26 14:15:10 crc kubenswrapper[4724]: I0226 14:15:10.976881 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:15:10 crc kubenswrapper[4724]: E0226 14:15:10.977494 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:15:17 crc kubenswrapper[4724]: I0226 14:15:17.604870 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:15:17 crc kubenswrapper[4724]: I0226 14:15:17.655032 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:15:17 crc kubenswrapper[4724]: I0226 14:15:17.853016 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j9ps"] Feb 26 14:15:18 crc kubenswrapper[4724]: I0226 14:15:18.633197 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4j9ps" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="registry-server" containerID="cri-o://3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc" gracePeriod=2 Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.406034 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.527168 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-utilities\") pod \"d687f75c-1ca6-4fcb-879c-6cf921851aff\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.527266 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-catalog-content\") pod \"d687f75c-1ca6-4fcb-879c-6cf921851aff\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.527397 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxnk2\" (UniqueName: \"kubernetes.io/projected/d687f75c-1ca6-4fcb-879c-6cf921851aff-kube-api-access-rxnk2\") pod \"d687f75c-1ca6-4fcb-879c-6cf921851aff\" (UID: \"d687f75c-1ca6-4fcb-879c-6cf921851aff\") " Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.528199 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-utilities" (OuterVolumeSpecName: "utilities") pod "d687f75c-1ca6-4fcb-879c-6cf921851aff" (UID: "d687f75c-1ca6-4fcb-879c-6cf921851aff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.534605 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d687f75c-1ca6-4fcb-879c-6cf921851aff-kube-api-access-rxnk2" (OuterVolumeSpecName: "kube-api-access-rxnk2") pod "d687f75c-1ca6-4fcb-879c-6cf921851aff" (UID: "d687f75c-1ca6-4fcb-879c-6cf921851aff"). InnerVolumeSpecName "kube-api-access-rxnk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.598118 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d687f75c-1ca6-4fcb-879c-6cf921851aff" (UID: "d687f75c-1ca6-4fcb-879c-6cf921851aff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.630933 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.630995 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d687f75c-1ca6-4fcb-879c-6cf921851aff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.631017 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxnk2\" (UniqueName: \"kubernetes.io/projected/d687f75c-1ca6-4fcb-879c-6cf921851aff-kube-api-access-rxnk2\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.649251 4724 generic.go:334] "Generic (PLEG): container finished" podID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerID="3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc" exitCode=0 Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.649315 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerDied","Data":"3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc"} Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.649388 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j9ps" event={"ID":"d687f75c-1ca6-4fcb-879c-6cf921851aff","Type":"ContainerDied","Data":"950cb00ff35b1721fedf01343fa352d0d1ff8dacdbab3d619d498ffa28b8abd4"} Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.649418 4724 scope.go:117] "RemoveContainer" containerID="3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.650696 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j9ps" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.684780 4724 scope.go:117] "RemoveContainer" containerID="d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.702951 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j9ps"] Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.714007 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4j9ps"] Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.723540 4724 scope.go:117] "RemoveContainer" containerID="12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.757506 4724 scope.go:117] "RemoveContainer" containerID="3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc" Feb 26 14:15:19 crc kubenswrapper[4724]: E0226 14:15:19.758202 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc\": container with ID starting with 3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc not found: ID does not exist" containerID="3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.758323 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc"} err="failed to get container status \"3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc\": rpc error: code = NotFound desc = could not find container \"3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc\": container with ID starting with 3adc20ff7aaf482907ed0e8792184e06ab32c158b1647d29a7049da6a69871fc not found: ID does not exist" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.758442 4724 scope.go:117] "RemoveContainer" containerID="d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7" Feb 26 14:15:19 crc kubenswrapper[4724]: E0226 14:15:19.759011 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7\": container with ID starting with d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7 not found: ID does not exist" containerID="d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.759042 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7"} err="failed to get container status \"d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7\": rpc error: code = NotFound desc = could not find container \"d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7\": container with ID starting with d1e4adb75275e2eafaedaf078ad642e8e24c2f63a6413a972720952be146d2a7 not found: ID does not exist" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.759066 4724 scope.go:117] "RemoveContainer" containerID="12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867" Feb 26 14:15:19 crc kubenswrapper[4724]: E0226 14:15:19.759337 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867\": container with ID starting with 12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867 not found: ID does not exist" containerID="12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.759359 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867"} err="failed to get container status \"12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867\": rpc error: code = NotFound desc = could not find container \"12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867\": container with ID starting with 12a16c354c8b9ef06ea21ceac2abef58dfb1763825a283b04b1e2be01e12e867 not found: ID does not exist" Feb 26 14:15:19 crc kubenswrapper[4724]: I0226 14:15:19.992519 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" path="/var/lib/kubelet/pods/d687f75c-1ca6-4fcb-879c-6cf921851aff/volumes" Feb 26 14:15:21 crc kubenswrapper[4724]: I0226 14:15:21.975823 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:15:21 crc kubenswrapper[4724]: E0226 14:15:21.976531 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.461776 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5pz69"] Feb 26 14:15:23 crc kubenswrapper[4724]: E0226 14:15:23.463112 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="registry-server" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.463130 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="registry-server" Feb 26 14:15:23 crc kubenswrapper[4724]: E0226 14:15:23.463150 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="extract-utilities" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.463157 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="extract-utilities" Feb 26 14:15:23 crc kubenswrapper[4724]: E0226 14:15:23.463168 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d52259-8faf-4e53-9c4d-6210079417f4" containerName="collect-profiles" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.463196 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d52259-8faf-4e53-9c4d-6210079417f4" containerName="collect-profiles" Feb 26 14:15:23 crc kubenswrapper[4724]: E0226 14:15:23.463211 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="extract-content" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.463217 4724 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="extract-content" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.466064 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d52259-8faf-4e53-9c4d-6210079417f4" containerName="collect-profiles" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.466086 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d687f75c-1ca6-4fcb-879c-6cf921851aff" containerName="registry-server" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.467489 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.477838 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pz69"] Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.553979 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5m6\" (UniqueName: \"kubernetes.io/projected/3924dfda-4613-482b-831c-c7ae9296d300-kube-api-access-lq5m6\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.554064 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-catalog-content\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.554097 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-utilities\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.656308 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq5m6\" (UniqueName: \"kubernetes.io/projected/3924dfda-4613-482b-831c-c7ae9296d300-kube-api-access-lq5m6\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.656467 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-catalog-content\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.656522 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-utilities\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.656979 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-catalog-content\") pod 
\"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.657106 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-utilities\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.695150 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq5m6\" (UniqueName: \"kubernetes.io/projected/3924dfda-4613-482b-831c-c7ae9296d300-kube-api-access-lq5m6\") pod \"redhat-marketplace-5pz69\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:23 crc kubenswrapper[4724]: I0226 14:15:23.790734 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:24 crc kubenswrapper[4724]: I0226 14:15:24.536812 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pz69"] Feb 26 14:15:24 crc kubenswrapper[4724]: I0226 14:15:24.707469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerStarted","Data":"390ffb5129fd5ef191e267e58db9deb8b89c8dac4e8dcccceea3ca5d54474f18"} Feb 26 14:15:25 crc kubenswrapper[4724]: I0226 14:15:25.717887 4724 generic.go:334] "Generic (PLEG): container finished" podID="3924dfda-4613-482b-831c-c7ae9296d300" containerID="a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2" exitCode=0 Feb 26 14:15:25 crc kubenswrapper[4724]: I0226 14:15:25.718197 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerDied","Data":"a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2"} Feb 26 14:15:27 crc kubenswrapper[4724]: I0226 14:15:27.744006 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerStarted","Data":"716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219"} Feb 26 14:15:28 crc kubenswrapper[4724]: I0226 14:15:28.760563 4724 generic.go:334] "Generic (PLEG): container finished" podID="3924dfda-4613-482b-831c-c7ae9296d300" containerID="716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219" exitCode=0 Feb 26 14:15:28 crc kubenswrapper[4724]: I0226 14:15:28.760699 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerDied","Data":"716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219"} Feb 26 14:15:29 crc kubenswrapper[4724]: I0226 14:15:29.773482 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerStarted","Data":"d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96"} Feb 26 14:15:29 crc kubenswrapper[4724]: I0226 14:15:29.797942 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-5pz69" podStartSLOduration=3.340877075 podStartE2EDuration="6.797921865s" podCreationTimestamp="2026-02-26 14:15:23 +0000 UTC" firstStartedPulling="2026-02-26 14:15:25.720122949 +0000 UTC m=+11392.375862064" lastFinishedPulling="2026-02-26 14:15:29.177167739 +0000 UTC m=+11395.832906854" observedRunningTime="2026-02-26 14:15:29.789550368 +0000 UTC m=+11396.445289483" watchObservedRunningTime="2026-02-26 14:15:29.797921865 +0000 UTC m=+11396.453660980" Feb 26 14:15:31 crc kubenswrapper[4724]: I0226 14:15:31.725305 4724 scope.go:117] "RemoveContainer" containerID="dae24897faed70ffc4c74ecc6cbab5243bfd8aa62952227dc78cf8a7cea0ca2d" Feb 26 14:15:33 crc kubenswrapper[4724]: I0226 14:15:33.791434 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:33 crc kubenswrapper[4724]: I0226 14:15:33.792016 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:34 crc kubenswrapper[4724]: I0226 14:15:34.861081 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5pz69" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="registry-server" probeResult="failure" output=< Feb 26 14:15:34 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:15:34 crc kubenswrapper[4724]: > Feb 26 14:15:36 crc kubenswrapper[4724]: I0226 14:15:36.975907 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:15:36 crc kubenswrapper[4724]: E0226 14:15:36.976668 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:15:43 crc kubenswrapper[4724]: I0226 14:15:43.861122 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:43 crc kubenswrapper[4724]: I0226 14:15:43.907661 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:44 crc kubenswrapper[4724]: I0226 14:15:44.366739 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pz69"] Feb 26 14:15:44 crc kubenswrapper[4724]: I0226 14:15:44.906104 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5pz69" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="registry-server" containerID="cri-o://d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96" gracePeriod=2 Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.531088 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.563881 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-catalog-content\") pod \"3924dfda-4613-482b-831c-c7ae9296d300\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.564050 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-utilities\") pod \"3924dfda-4613-482b-831c-c7ae9296d300\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.564127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq5m6\" (UniqueName: \"kubernetes.io/projected/3924dfda-4613-482b-831c-c7ae9296d300-kube-api-access-lq5m6\") pod \"3924dfda-4613-482b-831c-c7ae9296d300\" (UID: \"3924dfda-4613-482b-831c-c7ae9296d300\") " Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.565080 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-utilities" (OuterVolumeSpecName: "utilities") pod "3924dfda-4613-482b-831c-c7ae9296d300" (UID: "3924dfda-4613-482b-831c-c7ae9296d300"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.584639 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3924dfda-4613-482b-831c-c7ae9296d300-kube-api-access-lq5m6" (OuterVolumeSpecName: "kube-api-access-lq5m6") pod "3924dfda-4613-482b-831c-c7ae9296d300" (UID: "3924dfda-4613-482b-831c-c7ae9296d300"). InnerVolumeSpecName "kube-api-access-lq5m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.603117 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3924dfda-4613-482b-831c-c7ae9296d300" (UID: "3924dfda-4613-482b-831c-c7ae9296d300"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.666332 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq5m6\" (UniqueName: \"kubernetes.io/projected/3924dfda-4613-482b-831c-c7ae9296d300-kube-api-access-lq5m6\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.666367 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.666380 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3924dfda-4613-482b-831c-c7ae9296d300-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.916392 4724 generic.go:334] "Generic (PLEG): container finished" podID="3924dfda-4613-482b-831c-c7ae9296d300" containerID="d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96" exitCode=0 Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.917638 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerDied","Data":"d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96"} Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.917732 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pz69" event={"ID":"3924dfda-4613-482b-831c-c7ae9296d300","Type":"ContainerDied","Data":"390ffb5129fd5ef191e267e58db9deb8b89c8dac4e8dcccceea3ca5d54474f18"} Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.917855 4724 scope.go:117] "RemoveContainer" containerID="d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.918111 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pz69" Feb 26 14:15:45 crc kubenswrapper[4724]: I0226 14:15:45.940157 4724 scope.go:117] "RemoveContainer" containerID="716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.017928 4724 scope.go:117] "RemoveContainer" containerID="a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.030792 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pz69"] Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.030969 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pz69"] Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.046761 4724 scope.go:117] "RemoveContainer" containerID="d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96" Feb 26 14:15:46 crc kubenswrapper[4724]: E0226 14:15:46.047316 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96\": container with ID starting with d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96 not found: ID does not exist" containerID="d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.047346 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96"} err="failed to get container status \"d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96\": rpc error: code = NotFound desc = could not find container \"d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96\": container with ID starting with d960cf96a5aa1cd6d38971754bf3a0e9686e2f1b78563d92371d465ceeb3fd96 not found: ID does not exist" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.047367 4724 scope.go:117] "RemoveContainer" containerID="716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219" Feb 26 14:15:46 crc kubenswrapper[4724]: E0226 14:15:46.047571 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219\": container with ID starting with 716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219 not found: ID does not exist" containerID="716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.047590 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219"} err="failed to get container status \"716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219\": rpc error: code = NotFound desc = could not find container \"716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219\": container with ID starting with 716fd9aae94658a505292fbe8e8ebfdc3b0471aed09a61f548339a3eb06cd219 not found: ID does not exist" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.047602 4724 scope.go:117] "RemoveContainer" containerID="a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2" Feb 26 14:15:46 crc kubenswrapper[4724]: E0226 14:15:46.047792 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2\": container with ID starting with a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2 not found: ID does not exist" containerID="a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2" Feb 26 14:15:46 crc kubenswrapper[4724]: I0226 14:15:46.047810 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2"} err="failed to get container status \"a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2\": rpc error: code = NotFound desc = could not find container \"a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2\": container with ID starting with a3b8ff0e1bc4292fde4bb74d81cf064dd7fb8dc7d1d3862eac21db94e4123fe2 not found: ID does not exist" Feb 26 14:15:47 crc kubenswrapper[4724]: I0226 14:15:47.975603 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:15:47 crc kubenswrapper[4724]: E0226 14:15:47.976940 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:15:47 crc kubenswrapper[4724]: I0226 14:15:47.987291 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3924dfda-4613-482b-831c-c7ae9296d300" path="/var/lib/kubelet/pods/3924dfda-4613-482b-831c-c7ae9296d300/volumes" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.166423 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535256-mt54d"] Feb 26 14:16:00 crc kubenswrapper[4724]: E0226 14:16:00.167552 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="extract-utilities" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.167574 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="extract-utilities" Feb 26 14:16:00 crc kubenswrapper[4724]: E0226 14:16:00.167604 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="registry-server" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.167613 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="registry-server" Feb 26 14:16:00 crc kubenswrapper[4724]: E0226 14:16:00.167645 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="extract-content" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.167654 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="extract-content" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.167908 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3924dfda-4613-482b-831c-c7ae9296d300" containerName="registry-server" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.168753 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.171255 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.171459 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.172695 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.182524 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-mt54d"] Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.214062 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwd7q\" (UniqueName: \"kubernetes.io/projected/f8480feb-cbed-4098-acd9-840e697cd0fa-kube-api-access-fwd7q\") pod \"auto-csr-approver-29535256-mt54d\" (UID: \"f8480feb-cbed-4098-acd9-840e697cd0fa\") " pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.315998 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwd7q\" (UniqueName: \"kubernetes.io/projected/f8480feb-cbed-4098-acd9-840e697cd0fa-kube-api-access-fwd7q\") pod \"auto-csr-approver-29535256-mt54d\" (UID: \"f8480feb-cbed-4098-acd9-840e697cd0fa\") " pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.372274 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwd7q\" (UniqueName: \"kubernetes.io/projected/f8480feb-cbed-4098-acd9-840e697cd0fa-kube-api-access-fwd7q\") pod \"auto-csr-approver-29535256-mt54d\" (UID: \"f8480feb-cbed-4098-acd9-840e697cd0fa\") " pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.527574 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:00 crc kubenswrapper[4724]: I0226 14:16:00.975196 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:16:00 crc kubenswrapper[4724]: E0226 14:16:00.975796 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:16:01 crc kubenswrapper[4724]: W0226 14:16:01.044059 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8480feb_cbed_4098_acd9_840e697cd0fa.slice/crio-cd25757caec6a3debc7413faab7be227bf3e0ad6f42d0ee9d3912b89d3105104 WatchSource:0}: Error finding container cd25757caec6a3debc7413faab7be227bf3e0ad6f42d0ee9d3912b89d3105104: Status 404 returned error can't find the container with id cd25757caec6a3debc7413faab7be227bf3e0ad6f42d0ee9d3912b89d3105104 Feb 26 14:16:01 crc kubenswrapper[4724]: I0226 14:16:01.043471 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-mt54d"] Feb 26 14:16:01 crc kubenswrapper[4724]: I0226 14:16:01.108834 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-mt54d" event={"ID":"f8480feb-cbed-4098-acd9-840e697cd0fa","Type":"ContainerStarted","Data":"cd25757caec6a3debc7413faab7be227bf3e0ad6f42d0ee9d3912b89d3105104"} Feb 26 14:16:03 crc kubenswrapper[4724]: I0226 14:16:03.131509 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-mt54d" event={"ID":"f8480feb-cbed-4098-acd9-840e697cd0fa","Type":"ContainerStarted","Data":"a301c385cef3cdc20d59f880bd2e311fc64d88fc6091040d4692fd927e4ab811"} Feb 26 14:16:03 crc kubenswrapper[4724]: I0226 14:16:03.153707 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535256-mt54d" podStartSLOduration=1.6703274129999999 podStartE2EDuration="3.153679001s" podCreationTimestamp="2026-02-26 14:16:00 +0000 UTC" firstStartedPulling="2026-02-26 14:16:01.047147206 +0000 UTC m=+11427.702886321" lastFinishedPulling="2026-02-26 14:16:02.530498794 +0000 UTC m=+11429.186237909" observedRunningTime="2026-02-26 14:16:03.145026747 +0000 UTC m=+11429.800765862" watchObservedRunningTime="2026-02-26 14:16:03.153679001 +0000 UTC m=+11429.809418116" Feb 26 14:16:05 crc kubenswrapper[4724]: I0226 14:16:05.162929 4724 generic.go:334] "Generic (PLEG): container finished" podID="f8480feb-cbed-4098-acd9-840e697cd0fa" containerID="a301c385cef3cdc20d59f880bd2e311fc64d88fc6091040d4692fd927e4ab811" exitCode=0 Feb 26 14:16:05 crc kubenswrapper[4724]: I0226 14:16:05.162993 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-mt54d" event={"ID":"f8480feb-cbed-4098-acd9-840e697cd0fa","Type":"ContainerDied","Data":"a301c385cef3cdc20d59f880bd2e311fc64d88fc6091040d4692fd927e4ab811"} Feb 26 14:16:06 crc kubenswrapper[4724]: I0226 14:16:06.854427 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:06 crc kubenswrapper[4724]: I0226 14:16:06.883547 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwd7q\" (UniqueName: \"kubernetes.io/projected/f8480feb-cbed-4098-acd9-840e697cd0fa-kube-api-access-fwd7q\") pod \"f8480feb-cbed-4098-acd9-840e697cd0fa\" (UID: \"f8480feb-cbed-4098-acd9-840e697cd0fa\") " Feb 26 14:16:06 crc kubenswrapper[4724]: I0226 14:16:06.919868 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8480feb-cbed-4098-acd9-840e697cd0fa-kube-api-access-fwd7q" (OuterVolumeSpecName: "kube-api-access-fwd7q") pod "f8480feb-cbed-4098-acd9-840e697cd0fa" (UID: "f8480feb-cbed-4098-acd9-840e697cd0fa"). InnerVolumeSpecName "kube-api-access-fwd7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:16:06 crc kubenswrapper[4724]: I0226 14:16:06.986474 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwd7q\" (UniqueName: \"kubernetes.io/projected/f8480feb-cbed-4098-acd9-840e697cd0fa-kube-api-access-fwd7q\") on node \"crc\" DevicePath \"\"" Feb 26 14:16:07 crc kubenswrapper[4724]: I0226 14:16:07.201701 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-mt54d" event={"ID":"f8480feb-cbed-4098-acd9-840e697cd0fa","Type":"ContainerDied","Data":"cd25757caec6a3debc7413faab7be227bf3e0ad6f42d0ee9d3912b89d3105104"} Feb 26 14:16:07 crc kubenswrapper[4724]: I0226 14:16:07.201747 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd25757caec6a3debc7413faab7be227bf3e0ad6f42d0ee9d3912b89d3105104" Feb 26 14:16:07 crc kubenswrapper[4724]: I0226 14:16:07.201781 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-mt54d" Feb 26 14:16:07 crc kubenswrapper[4724]: I0226 14:16:07.273571 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535250-ftghv"] Feb 26 14:16:07 crc kubenswrapper[4724]: I0226 14:16:07.287849 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535250-ftghv"] Feb 26 14:16:07 crc kubenswrapper[4724]: I0226 14:16:07.990982 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fead7f94-8dc5-4b2a-8f4d-bdf5bb409677" path="/var/lib/kubelet/pods/fead7f94-8dc5-4b2a-8f4d-bdf5bb409677/volumes" Feb 26 14:16:13 crc kubenswrapper[4724]: I0226 14:16:13.986446 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:16:13 crc kubenswrapper[4724]: E0226 14:16:13.987539 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:16:28 crc kubenswrapper[4724]: I0226 14:16:28.976142 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:16:29 crc kubenswrapper[4724]: I0226 14:16:29.462409 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"1c3d682a84a5264f39b2717a5178f047f1df79558d9e564708b101a46fdb84ca"} Feb 26 14:16:31 crc kubenswrapper[4724]: I0226 14:16:31.829939 4724 scope.go:117] "RemoveContainer" containerID="5a548eab816a2ee242d737ae53b33d8218321c084d44002fa976b408a4a8e9a3" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.821615 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7jr5"] Feb 26 14:17:19 crc kubenswrapper[4724]: E0226 14:17:19.823945 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8480feb-cbed-4098-acd9-840e697cd0fa" containerName="oc" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.823970 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8480feb-cbed-4098-acd9-840e697cd0fa" containerName="oc" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.824280 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8480feb-cbed-4098-acd9-840e697cd0fa" containerName="oc" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.831513 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.840037 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7jr5"] Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.874758 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-utilities\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.874848 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-catalog-content\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.875079 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cbm\" (UniqueName: \"kubernetes.io/projected/80e910d6-9bea-480a-b665-37a56a03e035-kube-api-access-42cbm\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.976378 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-utilities\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.976425 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-catalog-content\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.976496 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cbm\" (UniqueName: \"kubernetes.io/projected/80e910d6-9bea-480a-b665-37a56a03e035-kube-api-access-42cbm\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.976860 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-catalog-content\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:19 crc kubenswrapper[4724]: I0226 14:17:19.976920 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-utilities\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:20 crc kubenswrapper[4724]: I0226 14:17:20.011533 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-42cbm\" (UniqueName: \"kubernetes.io/projected/80e910d6-9bea-480a-b665-37a56a03e035-kube-api-access-42cbm\") pod \"redhat-operators-n7jr5\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:20 crc kubenswrapper[4724]: I0226 14:17:20.154060 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:21 crc kubenswrapper[4724]: I0226 14:17:21.091397 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7jr5"] Feb 26 14:17:22 crc kubenswrapper[4724]: I0226 14:17:22.011280 4724 generic.go:334] "Generic (PLEG): container finished" podID="80e910d6-9bea-480a-b665-37a56a03e035" containerID="093126d98d88aaa3cc68f19f47ff1cf7aef6607c82d77bf42baa987dc53b2eb0" exitCode=0 Feb 26 14:17:22 crc kubenswrapper[4724]: I0226 14:17:22.011373 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerDied","Data":"093126d98d88aaa3cc68f19f47ff1cf7aef6607c82d77bf42baa987dc53b2eb0"} Feb 26 14:17:22 crc kubenswrapper[4724]: I0226 14:17:22.011620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerStarted","Data":"09acf88beeddad528352126e0ec30061a80295a5785081e11e8ce67a12ddceb8"} Feb 26 14:17:22 crc kubenswrapper[4724]: I0226 14:17:22.017449 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:17:24 crc kubenswrapper[4724]: I0226 14:17:24.028896 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerStarted","Data":"8399841e2bb8e7dfd90be738d414a9f8001c40cac120ac2b0cae1fdc887e7934"} Feb 26 14:17:30 crc kubenswrapper[4724]: I0226 14:17:30.088113 4724 generic.go:334] "Generic (PLEG): container finished" podID="80e910d6-9bea-480a-b665-37a56a03e035" containerID="8399841e2bb8e7dfd90be738d414a9f8001c40cac120ac2b0cae1fdc887e7934" exitCode=0 Feb 26 14:17:30 crc kubenswrapper[4724]: I0226 14:17:30.088296 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerDied","Data":"8399841e2bb8e7dfd90be738d414a9f8001c40cac120ac2b0cae1fdc887e7934"} Feb 26 14:17:31 crc kubenswrapper[4724]: I0226 14:17:31.099761 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerStarted","Data":"f334925316fdff02edcf6432a0e88be44fd1a424821b6eeab0d6bb6b426e1827"} Feb 26 14:17:31 crc kubenswrapper[4724]: I0226 14:17:31.128909 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7jr5" podStartSLOduration=3.363187919 podStartE2EDuration="12.128859783s" podCreationTimestamp="2026-02-26 14:17:19 +0000 UTC" firstStartedPulling="2026-02-26 14:17:22.01351619 +0000 UTC m=+11508.669255305" lastFinishedPulling="2026-02-26 14:17:30.779188054 +0000 UTC m=+11517.434927169" observedRunningTime="2026-02-26 14:17:31.122773753 +0000 UTC m=+11517.778512938" watchObservedRunningTime="2026-02-26 14:17:31.128859783 +0000 UTC m=+11517.784598918" Feb 26 14:17:40 crc 
Feb 26 14:17:40 crc kubenswrapper[4724]: I0226 14:17:40.155026 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:40 crc kubenswrapper[4724]: I0226 14:17:40.155826 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:17:41 crc kubenswrapper[4724]: I0226 14:17:41.211345 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7jr5" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" probeResult="failure" output=< Feb 26 14:17:41 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:17:41 crc kubenswrapper[4724]: > Feb 26 14:17:51 crc kubenswrapper[4724]: I0226 14:17:51.211307 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7jr5" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" probeResult="failure" output=< Feb 26 14:17:51 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:17:51 crc kubenswrapper[4724]: > Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.159002 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535258-q9b5s"] Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.162370 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.174265 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.174280 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.175348 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.181764 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvbh\" (UniqueName: \"kubernetes.io/projected/6965f148-bc0b-4754-bb14-5246bec643c0-kube-api-access-4jvbh\") pod \"auto-csr-approver-29535258-q9b5s\" (UID: \"6965f148-bc0b-4754-bb14-5246bec643c0\") " pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.186376 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-q9b5s"] Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.285691 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvbh\" (UniqueName: \"kubernetes.io/projected/6965f148-bc0b-4754-bb14-5246bec643c0-kube-api-access-4jvbh\") pod \"auto-csr-approver-29535258-q9b5s\" (UID: \"6965f148-bc0b-4754-bb14-5246bec643c0\") " pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:00 crc kubenswrapper[4724]: I0226 14:18:00.324714 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jvbh\" (UniqueName: \"kubernetes.io/projected/6965f148-bc0b-4754-bb14-5246bec643c0-kube-api-access-4jvbh\") pod \"auto-csr-approver-29535258-q9b5s\" (UID: \"6965f148-bc0b-4754-bb14-5246bec643c0\") " pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:00 crc
kubenswrapper[4724]: I0226 14:18:00.488154 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:01 crc kubenswrapper[4724]: I0226 14:18:01.236908 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7jr5" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" probeResult="failure" output=< Feb 26 14:18:01 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:18:01 crc kubenswrapper[4724]: > Feb 26 14:18:01 crc kubenswrapper[4724]: I0226 14:18:01.482887 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-q9b5s"] Feb 26 14:18:02 crc kubenswrapper[4724]: I0226 14:18:02.463646 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" event={"ID":"6965f148-bc0b-4754-bb14-5246bec643c0","Type":"ContainerStarted","Data":"d9e5ca8f5ae79136b035a0166ae100313d7e7001ad5e2fe88f828eab26c6928b"} Feb 26 14:18:04 crc kubenswrapper[4724]: I0226 14:18:04.486388 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" event={"ID":"6965f148-bc0b-4754-bb14-5246bec643c0","Type":"ContainerStarted","Data":"7f94da5f1cd491920870bcf89ba7c7c055bd5fd4e5867742234dfa4c469fd26f"} Feb 26 14:18:04 crc kubenswrapper[4724]: I0226 14:18:04.514914 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" podStartSLOduration=3.38444788 podStartE2EDuration="4.514888088s" podCreationTimestamp="2026-02-26 14:18:00 +0000 UTC" firstStartedPulling="2026-02-26 14:18:01.431530761 +0000 UTC m=+11548.087269876" lastFinishedPulling="2026-02-26 14:18:02.561970969 +0000 UTC m=+11549.217710084" observedRunningTime="2026-02-26 14:18:04.510776907 +0000 UTC m=+11551.166516022" watchObservedRunningTime="2026-02-26 14:18:04.514888088 +0000 UTC m=+11551.170627213" Feb 26 14:18:05 crc kubenswrapper[4724]: I0226 14:18:05.500975 4724 generic.go:334] "Generic (PLEG): container finished" podID="6965f148-bc0b-4754-bb14-5246bec643c0" containerID="7f94da5f1cd491920870bcf89ba7c7c055bd5fd4e5867742234dfa4c469fd26f" exitCode=0 Feb 26 14:18:05 crc kubenswrapper[4724]: I0226 14:18:05.501051 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" event={"ID":"6965f148-bc0b-4754-bb14-5246bec643c0","Type":"ContainerDied","Data":"7f94da5f1cd491920870bcf89ba7c7c055bd5fd4e5867742234dfa4c469fd26f"} Feb 26 14:18:07 crc kubenswrapper[4724]: I0226 14:18:07.528615 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" event={"ID":"6965f148-bc0b-4754-bb14-5246bec643c0","Type":"ContainerDied","Data":"d9e5ca8f5ae79136b035a0166ae100313d7e7001ad5e2fe88f828eab26c6928b"} Feb 26 14:18:07 crc kubenswrapper[4724]: I0226 14:18:07.529588 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e5ca8f5ae79136b035a0166ae100313d7e7001ad5e2fe88f828eab26c6928b" Feb 26 14:18:07 crc kubenswrapper[4724]: I0226 14:18:07.613529 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:07 crc kubenswrapper[4724]: I0226 14:18:07.697368 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jvbh\" (UniqueName: \"kubernetes.io/projected/6965f148-bc0b-4754-bb14-5246bec643c0-kube-api-access-4jvbh\") pod \"6965f148-bc0b-4754-bb14-5246bec643c0\" (UID: \"6965f148-bc0b-4754-bb14-5246bec643c0\") " Feb 26 14:18:07 crc kubenswrapper[4724]: I0226 14:18:07.726887 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6965f148-bc0b-4754-bb14-5246bec643c0-kube-api-access-4jvbh" (OuterVolumeSpecName: "kube-api-access-4jvbh") pod "6965f148-bc0b-4754-bb14-5246bec643c0" (UID: "6965f148-bc0b-4754-bb14-5246bec643c0"). InnerVolumeSpecName "kube-api-access-4jvbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:07 crc kubenswrapper[4724]: I0226 14:18:07.801256 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jvbh\" (UniqueName: \"kubernetes.io/projected/6965f148-bc0b-4754-bb14-5246bec643c0-kube-api-access-4jvbh\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:08 crc kubenswrapper[4724]: I0226 14:18:08.538347 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-q9b5s" Feb 26 14:18:08 crc kubenswrapper[4724]: I0226 14:18:08.792992 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535252-9dvkv"] Feb 26 14:18:08 crc kubenswrapper[4724]: I0226 14:18:08.802796 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535252-9dvkv"] Feb 26 14:18:09 crc kubenswrapper[4724]: I0226 14:18:09.995711 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee8124b-347c-4fb2-87c3-4c79c7c86e9d" path="/var/lib/kubelet/pods/cee8124b-347c-4fb2-87c3-4c79c7c86e9d/volumes" Feb 26 14:18:11 crc kubenswrapper[4724]: I0226 14:18:11.207663 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7jr5" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" probeResult="failure" output=< Feb 26 14:18:11 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:18:11 crc kubenswrapper[4724]: > Feb 26 14:18:21 crc kubenswrapper[4724]: I0226 14:18:21.210526 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7jr5" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" probeResult="failure" output=< Feb 26 14:18:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:18:21 crc kubenswrapper[4724]: > Feb 26 14:18:30 crc kubenswrapper[4724]: I0226 14:18:30.225736 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:18:30 crc kubenswrapper[4724]: I0226 14:18:30.284864 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:18:32 crc kubenswrapper[4724]: I0226 14:18:32.046830 4724 scope.go:117] "RemoveContainer" containerID="fd92026b84a583aa8b6673880bbc8bc31591598a8cdcf9000fa7565d4e709503" Feb 26 14:18:33 crc kubenswrapper[4724]: I0226 14:18:33.308141 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7jr5"] Feb 26 
Feb 26 14:18:33 crc kubenswrapper[4724]: I0226 14:18:33.310763 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n7jr5" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" containerID="cri-o://f334925316fdff02edcf6432a0e88be44fd1a424821b6eeab0d6bb6b426e1827" gracePeriod=2 Feb 26 14:18:33 crc kubenswrapper[4724]: I0226 14:18:33.794804 4724 generic.go:334] "Generic (PLEG): container finished" podID="80e910d6-9bea-480a-b665-37a56a03e035" containerID="f334925316fdff02edcf6432a0e88be44fd1a424821b6eeab0d6bb6b426e1827" exitCode=0 Feb 26 14:18:33 crc kubenswrapper[4724]: I0226 14:18:33.794855 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerDied","Data":"f334925316fdff02edcf6432a0e88be44fd1a424821b6eeab0d6bb6b426e1827"} Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.284050 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.435034 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-utilities\") pod \"80e910d6-9bea-480a-b665-37a56a03e035\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.435125 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42cbm\" (UniqueName: \"kubernetes.io/projected/80e910d6-9bea-480a-b665-37a56a03e035-kube-api-access-42cbm\") pod \"80e910d6-9bea-480a-b665-37a56a03e035\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.435232 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-catalog-content\") pod \"80e910d6-9bea-480a-b665-37a56a03e035\" (UID: \"80e910d6-9bea-480a-b665-37a56a03e035\") " Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.436516 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-utilities" (OuterVolumeSpecName: "utilities") pod "80e910d6-9bea-480a-b665-37a56a03e035" (UID: "80e910d6-9bea-480a-b665-37a56a03e035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.448530 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e910d6-9bea-480a-b665-37a56a03e035-kube-api-access-42cbm" (OuterVolumeSpecName: "kube-api-access-42cbm") pod "80e910d6-9bea-480a-b665-37a56a03e035" (UID: "80e910d6-9bea-480a-b665-37a56a03e035"). InnerVolumeSpecName "kube-api-access-42cbm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.537258 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.537289 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42cbm\" (UniqueName: \"kubernetes.io/projected/80e910d6-9bea-480a-b665-37a56a03e035-kube-api-access-42cbm\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.580849 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80e910d6-9bea-480a-b665-37a56a03e035" (UID: "80e910d6-9bea-480a-b665-37a56a03e035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.639640 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e910d6-9bea-480a-b665-37a56a03e035-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.806400 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7jr5" event={"ID":"80e910d6-9bea-480a-b665-37a56a03e035","Type":"ContainerDied","Data":"09acf88beeddad528352126e0ec30061a80295a5785081e11e8ce67a12ddceb8"} Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.806460 4724 scope.go:117] "RemoveContainer" containerID="f334925316fdff02edcf6432a0e88be44fd1a424821b6eeab0d6bb6b426e1827" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.806601 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7jr5" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.840962 4724 scope.go:117] "RemoveContainer" containerID="8399841e2bb8e7dfd90be738d414a9f8001c40cac120ac2b0cae1fdc887e7934" Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.848380 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7jr5"] Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.861350 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n7jr5"] Feb 26 14:18:34 crc kubenswrapper[4724]: I0226 14:18:34.873435 4724 scope.go:117] "RemoveContainer" containerID="093126d98d88aaa3cc68f19f47ff1cf7aef6607c82d77bf42baa987dc53b2eb0" Feb 26 14:18:35 crc kubenswrapper[4724]: I0226 14:18:35.988209 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e910d6-9bea-480a-b665-37a56a03e035" path="/var/lib/kubelet/pods/80e910d6-9bea-480a-b665-37a56a03e035/volumes" Feb 26 14:18:46 crc kubenswrapper[4724]: I0226 14:18:46.967994 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:18:46 crc kubenswrapper[4724]: I0226 14:18:46.972027 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:19:16 crc kubenswrapper[4724]: I0226 14:19:16.906223 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:19:16 crc kubenswrapper[4724]: I0226 14:19:16.906735 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:19:46 crc kubenswrapper[4724]: I0226 14:19:46.909734 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:19:46 crc kubenswrapper[4724]: I0226 14:19:46.910159 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:19:46 crc kubenswrapper[4724]: I0226 14:19:46.910218 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:19:46 crc kubenswrapper[4724]: I0226 14:19:46.910904 4724 
Feb 26 14:19:46 crc kubenswrapper[4724]: I0226 14:19:46.910904 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c3d682a84a5264f39b2717a5178f047f1df79558d9e564708b101a46fdb84ca"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:19:46 crc kubenswrapper[4724]: I0226 14:19:46.910946 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://1c3d682a84a5264f39b2717a5178f047f1df79558d9e564708b101a46fdb84ca" gracePeriod=600 Feb 26 14:19:47 crc kubenswrapper[4724]: I0226 14:19:47.580482 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="1c3d682a84a5264f39b2717a5178f047f1df79558d9e564708b101a46fdb84ca" exitCode=0 Feb 26 14:19:47 crc kubenswrapper[4724]: I0226 14:19:47.580557 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"1c3d682a84a5264f39b2717a5178f047f1df79558d9e564708b101a46fdb84ca"} Feb 26 14:19:47 crc kubenswrapper[4724]: I0226 14:19:47.580825 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"} Feb 26 14:19:47 crc kubenswrapper[4724]: I0226 14:19:47.580877 4724 scope.go:117] "RemoveContainer" containerID="55a336a790aed46e35f23e938c0185238d8933ec693a93050cd0236da6e069e4" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.165198 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535260-zb5c6"] Feb 26 14:20:00 crc kubenswrapper[4724]: E0226 14:20:00.167900 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6965f148-bc0b-4754-bb14-5246bec643c0" containerName="oc" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.167931 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6965f148-bc0b-4754-bb14-5246bec643c0" containerName="oc" Feb 26 14:20:00 crc kubenswrapper[4724]: E0226 14:20:00.167994 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.168007 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" Feb 26 14:20:00 crc kubenswrapper[4724]: E0226 14:20:00.168031 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="extract-utilities" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.168040 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="extract-utilities" Feb 26 14:20:00 crc kubenswrapper[4724]: E0226 14:20:00.168051 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="extract-content" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.168062 4724 state_mem.go:107] "Deleted CPUSet assignment"
podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="extract-content" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.168344 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6965f148-bc0b-4754-bb14-5246bec643c0" containerName="oc" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.168370 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e910d6-9bea-480a-b665-37a56a03e035" containerName="registry-server" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.169288 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.177211 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.177531 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.178036 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-zb5c6"] Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.181646 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.321473 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpdm\" (UniqueName: \"kubernetes.io/projected/75b662c1-1e34-45c8-b790-e7ff1995d0e3-kube-api-access-8hpdm\") pod \"auto-csr-approver-29535260-zb5c6\" (UID: \"75b662c1-1e34-45c8-b790-e7ff1995d0e3\") " pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.423579 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpdm\" (UniqueName: \"kubernetes.io/projected/75b662c1-1e34-45c8-b790-e7ff1995d0e3-kube-api-access-8hpdm\") pod \"auto-csr-approver-29535260-zb5c6\" (UID: \"75b662c1-1e34-45c8-b790-e7ff1995d0e3\") " pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.448232 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpdm\" (UniqueName: \"kubernetes.io/projected/75b662c1-1e34-45c8-b790-e7ff1995d0e3-kube-api-access-8hpdm\") pod \"auto-csr-approver-29535260-zb5c6\" (UID: \"75b662c1-1e34-45c8-b790-e7ff1995d0e3\") " pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:00 crc kubenswrapper[4724]: I0226 14:20:00.506155 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:01 crc kubenswrapper[4724]: I0226 14:20:01.021724 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-zb5c6"] Feb 26 14:20:01 crc kubenswrapper[4724]: I0226 14:20:01.709725 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" event={"ID":"75b662c1-1e34-45c8-b790-e7ff1995d0e3","Type":"ContainerStarted","Data":"1421c33cdfa6142a4246f6244d06073fa3dd17df2f2224c1c0a0a665bbdac3a8"} Feb 26 14:20:02 crc kubenswrapper[4724]: I0226 14:20:02.720252 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" event={"ID":"75b662c1-1e34-45c8-b790-e7ff1995d0e3","Type":"ContainerStarted","Data":"45169eda89aa3cccc6e42b3621549cf699e01872fbd5f5b22f099175e1cf2cc4"} Feb 26 14:20:04 crc kubenswrapper[4724]: I0226 14:20:04.739968 4724 generic.go:334] "Generic (PLEG): container finished" podID="75b662c1-1e34-45c8-b790-e7ff1995d0e3" containerID="45169eda89aa3cccc6e42b3621549cf699e01872fbd5f5b22f099175e1cf2cc4" exitCode=0 Feb 26 14:20:04 crc kubenswrapper[4724]: I0226 14:20:04.740053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" event={"ID":"75b662c1-1e34-45c8-b790-e7ff1995d0e3","Type":"ContainerDied","Data":"45169eda89aa3cccc6e42b3621549cf699e01872fbd5f5b22f099175e1cf2cc4"} Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.181332 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.340727 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hpdm\" (UniqueName: \"kubernetes.io/projected/75b662c1-1e34-45c8-b790-e7ff1995d0e3-kube-api-access-8hpdm\") pod \"75b662c1-1e34-45c8-b790-e7ff1995d0e3\" (UID: \"75b662c1-1e34-45c8-b790-e7ff1995d0e3\") " Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.347814 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b662c1-1e34-45c8-b790-e7ff1995d0e3-kube-api-access-8hpdm" (OuterVolumeSpecName: "kube-api-access-8hpdm") pod "75b662c1-1e34-45c8-b790-e7ff1995d0e3" (UID: "75b662c1-1e34-45c8-b790-e7ff1995d0e3"). InnerVolumeSpecName "kube-api-access-8hpdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.443199 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hpdm\" (UniqueName: \"kubernetes.io/projected/75b662c1-1e34-45c8-b790-e7ff1995d0e3-kube-api-access-8hpdm\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.760360 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" event={"ID":"75b662c1-1e34-45c8-b790-e7ff1995d0e3","Type":"ContainerDied","Data":"1421c33cdfa6142a4246f6244d06073fa3dd17df2f2224c1c0a0a665bbdac3a8"} Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.760420 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1421c33cdfa6142a4246f6244d06073fa3dd17df2f2224c1c0a0a665bbdac3a8" Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.760445 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-zb5c6" Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.885548 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535254-458tm"] Feb 26 14:20:06 crc kubenswrapper[4724]: I0226 14:20:06.896227 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535254-458tm"] Feb 26 14:20:07 crc kubenswrapper[4724]: I0226 14:20:07.990114 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bbe523-5e73-4a67-898f-e22b51bcbb10" path="/var/lib/kubelet/pods/d4bbe523-5e73-4a67-898f-e22b51bcbb10/volumes" Feb 26 14:20:32 crc kubenswrapper[4724]: I0226 14:20:32.385230 4724 scope.go:117] "RemoveContainer" containerID="c4bdc974ce1093c89a5bebede21f962c70d18f1b396df837b590cc6e98050944" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.085736 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p8ddt"] Feb 26 14:20:51 crc kubenswrapper[4724]: E0226 14:20:51.086677 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b662c1-1e34-45c8-b790-e7ff1995d0e3" containerName="oc" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.086728 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b662c1-1e34-45c8-b790-e7ff1995d0e3" containerName="oc" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.086928 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b662c1-1e34-45c8-b790-e7ff1995d0e3" containerName="oc" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.088319 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.108228 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8ddt"] Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.199471 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-utilities\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.199640 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-catalog-content\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.199949 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2h4k\" (UniqueName: \"kubernetes.io/projected/2690f792-8723-46f6-8b83-6c3f192e2a59-kube-api-access-s2h4k\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.302584 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2h4k\" (UniqueName: \"kubernetes.io/projected/2690f792-8723-46f6-8b83-6c3f192e2a59-kube-api-access-s2h4k\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") 
" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.303086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-utilities\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.303227 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-catalog-content\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.303544 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-utilities\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.303711 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-catalog-content\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.330594 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2h4k\" (UniqueName: \"kubernetes.io/projected/2690f792-8723-46f6-8b83-6c3f192e2a59-kube-api-access-s2h4k\") pod \"certified-operators-p8ddt\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.411359 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:20:51 crc kubenswrapper[4724]: I0226 14:20:51.905727 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8ddt"] Feb 26 14:20:52 crc kubenswrapper[4724]: I0226 14:20:52.186285 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerStarted","Data":"1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1"} Feb 26 14:20:53 crc kubenswrapper[4724]: I0226 14:20:53.197084 4724 generic.go:334] "Generic (PLEG): container finished" podID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerID="900acc162be5ff390dcc5700e3269ea0950b21eccb0584ca6a54899a124dc0e8" exitCode=0 Feb 26 14:20:53 crc kubenswrapper[4724]: I0226 14:20:53.197224 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerDied","Data":"900acc162be5ff390dcc5700e3269ea0950b21eccb0584ca6a54899a124dc0e8"} Feb 26 14:20:55 crc kubenswrapper[4724]: I0226 14:20:55.220475 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerStarted","Data":"a6f5b02a18666399dea659c89e4c256260b8008660f45abda36a66dc94cdf214"} Feb 26 14:20:58 crc kubenswrapper[4724]: I0226 14:20:58.246070 4724 generic.go:334] "Generic (PLEG): container finished" podID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerID="a6f5b02a18666399dea659c89e4c256260b8008660f45abda36a66dc94cdf214" exitCode=0 Feb 26 14:20:58 crc kubenswrapper[4724]: I0226 14:20:58.246151 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerDied","Data":"a6f5b02a18666399dea659c89e4c256260b8008660f45abda36a66dc94cdf214"} Feb 26 14:20:59 crc kubenswrapper[4724]: I0226 14:20:59.257411 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerStarted","Data":"4c5b6152d97b3a3c34b781e02372f7eb1598bfb47f0959ff20b151e2e06e1a73"} Feb 26 14:20:59 crc kubenswrapper[4724]: I0226 14:20:59.287585 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p8ddt" podStartSLOduration=2.787983195 podStartE2EDuration="8.287350501s" podCreationTimestamp="2026-02-26 14:20:51 +0000 UTC" firstStartedPulling="2026-02-26 14:20:53.200858289 +0000 UTC m=+11719.856597394" lastFinishedPulling="2026-02-26 14:20:58.700225585 +0000 UTC m=+11725.355964700" observedRunningTime="2026-02-26 14:20:59.277489247 +0000 UTC m=+11725.933228382" watchObservedRunningTime="2026-02-26 14:20:59.287350501 +0000 UTC m=+11725.943089616" Feb 26 14:21:01 crc kubenswrapper[4724]: I0226 14:21:01.413341 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:21:01 crc kubenswrapper[4724]: I0226 14:21:01.413945 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:21:02 crc kubenswrapper[4724]: I0226 14:21:02.468513 4724 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-p8ddt" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" probeResult="failure" output=< Feb 26 14:21:02 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:21:02 crc kubenswrapper[4724]: > Feb 26 14:21:12 crc kubenswrapper[4724]: I0226 14:21:12.469546 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p8ddt" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" probeResult="failure" output=< Feb 26 14:21:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:21:12 crc kubenswrapper[4724]: > Feb 26 14:21:22 crc kubenswrapper[4724]: I0226 14:21:22.463031 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p8ddt" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" probeResult="failure" output=< Feb 26 14:21:22 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:21:22 crc kubenswrapper[4724]: > Feb 26 14:21:31 crc kubenswrapper[4724]: I0226 14:21:31.467964 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:21:31 crc kubenswrapper[4724]: I0226 14:21:31.526883 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:21:31 crc kubenswrapper[4724]: I0226 14:21:31.707167 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8ddt"] Feb 26 14:21:32 crc kubenswrapper[4724]: I0226 14:21:32.590368 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p8ddt" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" containerID="cri-o://4c5b6152d97b3a3c34b781e02372f7eb1598bfb47f0959ff20b151e2e06e1a73" gracePeriod=2 Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.623964 4724 generic.go:334] "Generic (PLEG): container finished" podID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerID="4c5b6152d97b3a3c34b781e02372f7eb1598bfb47f0959ff20b151e2e06e1a73" exitCode=0 Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.624052 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerDied","Data":"4c5b6152d97b3a3c34b781e02372f7eb1598bfb47f0959ff20b151e2e06e1a73"} Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.750288 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.818957 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-catalog-content\") pod \"2690f792-8723-46f6-8b83-6c3f192e2a59\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.819094 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-utilities\") pod \"2690f792-8723-46f6-8b83-6c3f192e2a59\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.819152 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2h4k\" (UniqueName: \"kubernetes.io/projected/2690f792-8723-46f6-8b83-6c3f192e2a59-kube-api-access-s2h4k\") pod \"2690f792-8723-46f6-8b83-6c3f192e2a59\" (UID: \"2690f792-8723-46f6-8b83-6c3f192e2a59\") " Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.829847 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-utilities" (OuterVolumeSpecName: "utilities") pod "2690f792-8723-46f6-8b83-6c3f192e2a59" (UID: "2690f792-8723-46f6-8b83-6c3f192e2a59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.873594 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2690f792-8723-46f6-8b83-6c3f192e2a59-kube-api-access-s2h4k" (OuterVolumeSpecName: "kube-api-access-s2h4k") pod "2690f792-8723-46f6-8b83-6c3f192e2a59" (UID: "2690f792-8723-46f6-8b83-6c3f192e2a59"). InnerVolumeSpecName "kube-api-access-s2h4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.892983 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2690f792-8723-46f6-8b83-6c3f192e2a59" (UID: "2690f792-8723-46f6-8b83-6c3f192e2a59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.921976 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.922008 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2690f792-8723-46f6-8b83-6c3f192e2a59-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:33 crc kubenswrapper[4724]: I0226 14:21:33.922020 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2h4k\" (UniqueName: \"kubernetes.io/projected/2690f792-8723-46f6-8b83-6c3f192e2a59-kube-api-access-s2h4k\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.638294 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8ddt" event={"ID":"2690f792-8723-46f6-8b83-6c3f192e2a59","Type":"ContainerDied","Data":"1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1"} Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.639194 4724 scope.go:117] "RemoveContainer" containerID="4c5b6152d97b3a3c34b781e02372f7eb1598bfb47f0959ff20b151e2e06e1a73" Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.638573 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8ddt" Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.680601 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8ddt"] Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.694671 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p8ddt"] Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.698912 4724 scope.go:117] "RemoveContainer" containerID="a6f5b02a18666399dea659c89e4c256260b8008660f45abda36a66dc94cdf214" Feb 26 14:21:34 crc kubenswrapper[4724]: I0226 14:21:34.734518 4724 scope.go:117] "RemoveContainer" containerID="900acc162be5ff390dcc5700e3269ea0950b21eccb0584ca6a54899a124dc0e8" Feb 26 14:21:35 crc kubenswrapper[4724]: I0226 14:21:35.993556 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" path="/var/lib/kubelet/pods/2690f792-8723-46f6-8b83-6c3f192e2a59/volumes" Feb 26 14:21:36 crc kubenswrapper[4724]: E0226 14:21:36.790732 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice/crio-1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:21:47 crc kubenswrapper[4724]: E0226 14:21:47.068660 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice/crio-1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:21:57 crc kubenswrapper[4724]: E0226 14:21:57.340478 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice/crio-1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1\": RecentStats: unable to find data in memory cache]" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.161979 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535262-9cq4w"] Feb 26 14:22:00 crc kubenswrapper[4724]: E0226 14:22:00.163145 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="extract-utilities" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.163186 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="extract-utilities" Feb 26 14:22:00 crc kubenswrapper[4724]: E0226 14:22:00.163228 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="extract-content" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.163252 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="extract-content" Feb 26 14:22:00 crc kubenswrapper[4724]: E0226 14:22:00.163271 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.163277 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.163519 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2690f792-8723-46f6-8b83-6c3f192e2a59" containerName="registry-server" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.164677 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.169380 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.169493 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.169560 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.214039 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-9cq4w"] Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.288199 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn7bp\" (UniqueName: \"kubernetes.io/projected/59969dce-471f-4172-8581-9f605d489c4f-kube-api-access-jn7bp\") pod \"auto-csr-approver-29535262-9cq4w\" (UID: \"59969dce-471f-4172-8581-9f605d489c4f\") " pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.390266 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn7bp\" (UniqueName: \"kubernetes.io/projected/59969dce-471f-4172-8581-9f605d489c4f-kube-api-access-jn7bp\") pod \"auto-csr-approver-29535262-9cq4w\" (UID: \"59969dce-471f-4172-8581-9f605d489c4f\") " pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.427605 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn7bp\" (UniqueName: \"kubernetes.io/projected/59969dce-471f-4172-8581-9f605d489c4f-kube-api-access-jn7bp\") pod \"auto-csr-approver-29535262-9cq4w\" (UID: \"59969dce-471f-4172-8581-9f605d489c4f\") " pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:00 crc kubenswrapper[4724]: I0226 14:22:00.498467 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:01 crc kubenswrapper[4724]: I0226 14:22:01.062850 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-9cq4w"] Feb 26 14:22:01 crc kubenswrapper[4724]: I0226 14:22:01.908828 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" event={"ID":"59969dce-471f-4172-8581-9f605d489c4f","Type":"ContainerStarted","Data":"816916e86ed5e2c4a5937f8854067a352b9ab12d064dc796b3310e073ab7880d"} Feb 26 14:22:03 crc kubenswrapper[4724]: I0226 14:22:03.933377 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" event={"ID":"59969dce-471f-4172-8581-9f605d489c4f","Type":"ContainerStarted","Data":"ce052d78551bac448a3cc6de4eaa682637309e321495edff29aa69a41727db73"} Feb 26 14:22:03 crc kubenswrapper[4724]: I0226 14:22:03.959356 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" podStartSLOduration=2.087932185 podStartE2EDuration="3.959321315s" podCreationTimestamp="2026-02-26 14:22:00 +0000 UTC" firstStartedPulling="2026-02-26 14:22:01.074745833 +0000 UTC m=+11787.730484948" lastFinishedPulling="2026-02-26 14:22:02.946134923 +0000 UTC m=+11789.601874078" observedRunningTime="2026-02-26 14:22:03.946758093 +0000 UTC m=+11790.602497218" watchObservedRunningTime="2026-02-26 14:22:03.959321315 +0000 UTC m=+11790.615060440" Feb 26 14:22:05 crc kubenswrapper[4724]: I0226 14:22:05.952386 4724 generic.go:334] "Generic (PLEG): container finished" podID="59969dce-471f-4172-8581-9f605d489c4f" containerID="ce052d78551bac448a3cc6de4eaa682637309e321495edff29aa69a41727db73" exitCode=0 Feb 26 14:22:05 crc kubenswrapper[4724]: I0226 14:22:05.952464 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" event={"ID":"59969dce-471f-4172-8581-9f605d489c4f","Type":"ContainerDied","Data":"ce052d78551bac448a3cc6de4eaa682637309e321495edff29aa69a41727db73"} Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.400723 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.573490 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn7bp\" (UniqueName: \"kubernetes.io/projected/59969dce-471f-4172-8581-9f605d489c4f-kube-api-access-jn7bp\") pod \"59969dce-471f-4172-8581-9f605d489c4f\" (UID: \"59969dce-471f-4172-8581-9f605d489c4f\") " Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.582631 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59969dce-471f-4172-8581-9f605d489c4f-kube-api-access-jn7bp" (OuterVolumeSpecName: "kube-api-access-jn7bp") pod "59969dce-471f-4172-8581-9f605d489c4f" (UID: "59969dce-471f-4172-8581-9f605d489c4f"). InnerVolumeSpecName "kube-api-access-jn7bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:22:07 crc kubenswrapper[4724]: E0226 14:22:07.596043 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice/crio-1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.676431 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn7bp\" (UniqueName: \"kubernetes.io/projected/59969dce-471f-4172-8581-9f605d489c4f-kube-api-access-jn7bp\") on node \"crc\" DevicePath \"\"" Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.971270 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" event={"ID":"59969dce-471f-4172-8581-9f605d489c4f","Type":"ContainerDied","Data":"816916e86ed5e2c4a5937f8854067a352b9ab12d064dc796b3310e073ab7880d"} Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.971318 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816916e86ed5e2c4a5937f8854067a352b9ab12d064dc796b3310e073ab7880d" Feb 26 14:22:07 crc kubenswrapper[4724]: I0226 14:22:07.971376 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-9cq4w" Feb 26 14:22:08 crc kubenswrapper[4724]: I0226 14:22:08.038676 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-mt54d"] Feb 26 14:22:08 crc kubenswrapper[4724]: I0226 14:22:08.046884 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-mt54d"] Feb 26 14:22:09 crc kubenswrapper[4724]: I0226 14:22:09.993463 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8480feb-cbed-4098-acd9-840e697cd0fa" path="/var/lib/kubelet/pods/f8480feb-cbed-4098-acd9-840e697cd0fa/volumes" Feb 26 14:22:16 crc kubenswrapper[4724]: I0226 14:22:16.906514 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:22:16 crc kubenswrapper[4724]: I0226 14:22:16.907204 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:22:17 crc kubenswrapper[4724]: E0226 14:22:17.886472 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice/crio-1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1\": RecentStats: unable to find data in memory 
cache]" Feb 26 14:22:28 crc kubenswrapper[4724]: E0226 14:22:28.233636 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2690f792_8723_46f6_8b83_6c3f192e2a59.slice/crio-1f1c974370d0c5200dbf9d56edc811fea0315c0c14347d0da88e15454d6611d1\": RecentStats: unable to find data in memory cache]" Feb 26 14:22:32 crc kubenswrapper[4724]: I0226 14:22:32.511791 4724 scope.go:117] "RemoveContainer" containerID="a301c385cef3cdc20d59f880bd2e311fc64d88fc6091040d4692fd927e4ab811" Feb 26 14:22:46 crc kubenswrapper[4724]: I0226 14:22:46.906917 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:22:46 crc kubenswrapper[4724]: I0226 14:22:46.907805 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:23:16 crc kubenswrapper[4724]: I0226 14:23:16.906486 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:23:16 crc kubenswrapper[4724]: I0226 14:23:16.907598 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:23:16 crc kubenswrapper[4724]: I0226 14:23:16.907704 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:23:16 crc kubenswrapper[4724]: I0226 14:23:16.909119 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:23:16 crc kubenswrapper[4724]: I0226 14:23:16.909220 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" gracePeriod=600 Feb 26 14:23:17 crc kubenswrapper[4724]: E0226 14:23:17.634632 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:23:18 crc kubenswrapper[4724]: I0226 14:23:18.097276 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" exitCode=0 Feb 26 14:23:18 crc kubenswrapper[4724]: I0226 14:23:18.097352 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"} Feb 26 14:23:18 crc kubenswrapper[4724]: I0226 14:23:18.097408 4724 scope.go:117] "RemoveContainer" containerID="1c3d682a84a5264f39b2717a5178f047f1df79558d9e564708b101a46fdb84ca" Feb 26 14:23:18 crc kubenswrapper[4724]: I0226 14:23:18.098689 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:23:18 crc kubenswrapper[4724]: E0226 14:23:18.099045 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:23:29 crc kubenswrapper[4724]: I0226 14:23:29.975002 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:23:29 crc kubenswrapper[4724]: E0226 14:23:29.975710 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:23:40 crc kubenswrapper[4724]: I0226 14:23:40.976502 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:23:40 crc kubenswrapper[4724]: E0226 14:23:40.977607 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:23:51 crc kubenswrapper[4724]: I0226 14:23:51.975716 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:23:51 crc kubenswrapper[4724]: E0226 14:23:51.977036 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
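Annotation: from here the kubelet re-attempts the restart on each sync and is rejected by the back-off gate, hence the repeating RemoveContainer / "Error syncing pod" pairs. Only the 5m0s ceiling is visible in the log; the sketch below assumes the commonly documented kubelet policy of an initial restart delay that doubles per restart until it reaches that cap (the 10s base is an assumption, not something this log shows).

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed 10s base; the 5m cap is the "back-off 5m0s" quoted in the
        // errors above.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }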
Feb 26 14:23:51 crc kubenswrapper[4724]: E0226 14:23:51.977036 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.156329 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535264-r7m5v"]
Feb 26 14:24:00 crc kubenswrapper[4724]: E0226 14:24:00.157716 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59969dce-471f-4172-8581-9f605d489c4f" containerName="oc"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.157736 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="59969dce-471f-4172-8581-9f605d489c4f" containerName="oc"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.158018 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="59969dce-471f-4172-8581-9f605d489c4f" containerName="oc"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.162111 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.169425 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-r7m5v"]
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.209246 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.209570 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.213247 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.315546 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdp4\" (UniqueName: \"kubernetes.io/projected/6f70d2b3-0274-454b-9b68-88a1e8bd8342-kube-api-access-fwdp4\") pod \"auto-csr-approver-29535264-r7m5v\" (UID: \"6f70d2b3-0274-454b-9b68-88a1e8bd8342\") " pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.418649 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdp4\" (UniqueName: \"kubernetes.io/projected/6f70d2b3-0274-454b-9b68-88a1e8bd8342-kube-api-access-fwdp4\") pod \"auto-csr-approver-29535264-r7m5v\" (UID: \"6f70d2b3-0274-454b-9b68-88a1e8bd8342\") " pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.443697 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdp4\" (UniqueName: \"kubernetes.io/projected/6f70d2b3-0274-454b-9b68-88a1e8bd8342-kube-api-access-fwdp4\") pod \"auto-csr-approver-29535264-r7m5v\" (UID: \"6f70d2b3-0274-454b-9b68-88a1e8bd8342\") " pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:00 crc kubenswrapper[4724]: I0226 14:24:00.548606 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:01 crc kubenswrapper[4724]: I0226 14:24:01.129343 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-r7m5v"]
Feb 26 14:24:01 crc kubenswrapper[4724]: I0226 14:24:01.149189 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 14:24:01 crc kubenswrapper[4724]: I0226 14:24:01.569220 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-r7m5v" event={"ID":"6f70d2b3-0274-454b-9b68-88a1e8bd8342","Type":"ContainerStarted","Data":"cc31135fcfc5af7c4af8fed99030fc0271e424139e6e09f5bdf670b4d55fcbc9"}
Feb 26 14:24:02 crc kubenswrapper[4724]: I0226 14:24:02.580994 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-r7m5v" event={"ID":"6f70d2b3-0274-454b-9b68-88a1e8bd8342","Type":"ContainerStarted","Data":"aa03166c13d748bec0d954229d77b9e433469dd8a34a6bfc40042e2baab330fc"}
Feb 26 14:24:02 crc kubenswrapper[4724]: I0226 14:24:02.600687 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535264-r7m5v" podStartSLOduration=1.550507077 podStartE2EDuration="2.600666818s" podCreationTimestamp="2026-02-26 14:24:00 +0000 UTC" firstStartedPulling="2026-02-26 14:24:01.134671407 +0000 UTC m=+11907.790410532" lastFinishedPulling="2026-02-26 14:24:02.184831148 +0000 UTC m=+11908.840570273" observedRunningTime="2026-02-26 14:24:02.593872799 +0000 UTC m=+11909.249611924" watchObservedRunningTime="2026-02-26 14:24:02.600666818 +0000 UTC m=+11909.256405933"
Feb 26 14:24:03 crc kubenswrapper[4724]: I0226 14:24:03.595488 4724 generic.go:334] "Generic (PLEG): container finished" podID="6f70d2b3-0274-454b-9b68-88a1e8bd8342" containerID="aa03166c13d748bec0d954229d77b9e433469dd8a34a6bfc40042e2baab330fc" exitCode=0
Feb 26 14:24:03 crc kubenswrapper[4724]: I0226 14:24:03.596040 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-r7m5v" event={"ID":"6f70d2b3-0274-454b-9b68-88a1e8bd8342","Type":"ContainerDied","Data":"aa03166c13d748bec0d954229d77b9e433469dd8a34a6bfc40042e2baab330fc"}
Feb 26 14:24:04 crc kubenswrapper[4724]: I0226 14:24:04.943439 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.121468 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwdp4\" (UniqueName: \"kubernetes.io/projected/6f70d2b3-0274-454b-9b68-88a1e8bd8342-kube-api-access-fwdp4\") pod \"6f70d2b3-0274-454b-9b68-88a1e8bd8342\" (UID: \"6f70d2b3-0274-454b-9b68-88a1e8bd8342\") "
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.148195 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f70d2b3-0274-454b-9b68-88a1e8bd8342-kube-api-access-fwdp4" (OuterVolumeSpecName: "kube-api-access-fwdp4") pod "6f70d2b3-0274-454b-9b68-88a1e8bd8342" (UID: "6f70d2b3-0274-454b-9b68-88a1e8bd8342"). InnerVolumeSpecName "kube-api-access-fwdp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.224297 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwdp4\" (UniqueName: \"kubernetes.io/projected/6f70d2b3-0274-454b-9b68-88a1e8bd8342-kube-api-access-fwdp4\") on node \"crc\" DevicePath \"\""
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.637512 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-r7m5v" event={"ID":"6f70d2b3-0274-454b-9b68-88a1e8bd8342","Type":"ContainerDied","Data":"cc31135fcfc5af7c4af8fed99030fc0271e424139e6e09f5bdf670b4d55fcbc9"}
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.637887 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc31135fcfc5af7c4af8fed99030fc0271e424139e6e09f5bdf670b4d55fcbc9"
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.637644 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-r7m5v"
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.698458 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-q9b5s"]
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.709509 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-q9b5s"]
Feb 26 14:24:05 crc kubenswrapper[4724]: I0226 14:24:05.986069 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6965f148-bc0b-4754-bb14-5246bec643c0" path="/var/lib/kubelet/pods/6965f148-bc0b-4754-bb14-5246bec643c0/volumes"
Feb 26 14:24:06 crc kubenswrapper[4724]: I0226 14:24:06.977129 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:24:06 crc kubenswrapper[4724]: E0226 14:24:06.977354 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:24:17 crc kubenswrapper[4724]: I0226 14:24:17.976136 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:24:17 crc kubenswrapper[4724]: E0226 14:24:17.976912 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:24:31 crc kubenswrapper[4724]: I0226 14:24:31.975611 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:24:31 crc kubenswrapper[4724]: E0226 14:24:31.976300 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:24:32 crc kubenswrapper[4724]: I0226 14:24:32.617457 4724 scope.go:117] "RemoveContainer" containerID="7f94da5f1cd491920870bcf89ba7c7c055bd5fd4e5867742234dfa4c469fd26f"
Feb 26 14:24:47 crc kubenswrapper[4724]: I0226 14:24:46.976427 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:24:47 crc kubenswrapper[4724]: E0226 14:24:46.978007 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:25:01 crc kubenswrapper[4724]: I0226 14:25:01.977312 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:25:01 crc kubenswrapper[4724]: E0226 14:25:01.979017 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:25:15 crc kubenswrapper[4724]: I0226 14:25:15.976568 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:25:15 crc kubenswrapper[4724]: E0226 14:25:15.977436 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.379439 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pmgf6"]
Feb 26 14:25:29 crc kubenswrapper[4724]: E0226 14:25:29.380783 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f70d2b3-0274-454b-9b68-88a1e8bd8342" containerName="oc"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.380812 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f70d2b3-0274-454b-9b68-88a1e8bd8342" containerName="oc"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.381259 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f70d2b3-0274-454b-9b68-88a1e8bd8342" containerName="oc"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.383858 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.396159 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pmgf6"]
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.418468 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r664r\" (UniqueName: \"kubernetes.io/projected/b9bb99a6-811e-4fa5-819f-89d20144957b-kube-api-access-r664r\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.418526 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-utilities\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.418568 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-catalog-content\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.519594 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-utilities\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.519649 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-catalog-content\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.519796 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r664r\" (UniqueName: \"kubernetes.io/projected/b9bb99a6-811e-4fa5-819f-89d20144957b-kube-api-access-r664r\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.520693 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-utilities\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.520965 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-catalog-content\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.556560 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r664r\" (UniqueName: \"kubernetes.io/projected/b9bb99a6-811e-4fa5-819f-89d20144957b-kube-api-access-r664r\") pod \"community-operators-pmgf6\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.706451 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:29 crc kubenswrapper[4724]: I0226 14:25:29.980945 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:25:29 crc kubenswrapper[4724]: E0226 14:25:29.982529 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:25:30 crc kubenswrapper[4724]: I0226 14:25:30.371872 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pmgf6"]
Feb 26 14:25:30 crc kubenswrapper[4724]: I0226 14:25:30.679122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerStarted","Data":"21df728ce9ef51fb5cea755af5276352995ad11e0761e954b68984e2de6a6f8f"}
Feb 26 14:25:31 crc kubenswrapper[4724]: I0226 14:25:31.694680 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerID="61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357" exitCode=0
Feb 26 14:25:31 crc kubenswrapper[4724]: I0226 14:25:31.694748 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerDied","Data":"61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357"}
Feb 26 14:25:33 crc kubenswrapper[4724]: I0226 14:25:33.715166 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerStarted","Data":"a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8"}
Feb 26 14:25:38 crc kubenswrapper[4724]: I0226 14:25:38.779301 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerID="a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8" exitCode=0
Feb 26 14:25:38 crc kubenswrapper[4724]: I0226 14:25:38.779369 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerDied","Data":"a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8"}
Feb 26 14:25:40 crc kubenswrapper[4724]: I0226 14:25:40.800391 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerStarted","Data":"f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d"}
Feb 26 14:25:40 crc kubenswrapper[4724]: I0226 14:25:40.821603 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pmgf6" podStartSLOduration=3.893903763 podStartE2EDuration="11.821566237s" podCreationTimestamp="2026-02-26 14:25:29 +0000 UTC" firstStartedPulling="2026-02-26 14:25:31.69806227 +0000 UTC m=+11998.353801385" lastFinishedPulling="2026-02-26 14:25:39.625724724 +0000 UTC m=+12006.281463859" observedRunningTime="2026-02-26 14:25:40.817939846 +0000 UTC m=+12007.473678981" watchObservedRunningTime="2026-02-26 14:25:40.821566237 +0000 UTC m=+12007.477305372"
Feb 26 14:25:41 crc kubenswrapper[4724]: I0226 14:25:41.975852 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:25:41 crc kubenswrapper[4724]: E0226 14:25:41.976263 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:25:49 crc kubenswrapper[4724]: I0226 14:25:49.707680 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:49 crc kubenswrapper[4724]: I0226 14:25:49.708409 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pmgf6"
Feb 26 14:25:50 crc kubenswrapper[4724]: I0226 14:25:50.763023 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pmgf6" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:25:50 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:25:50 crc kubenswrapper[4724]: >
Feb 26 14:25:56 crc kubenswrapper[4724]: I0226 14:25:56.975737 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
Feb 26 14:25:56 crc kubenswrapper[4724]: E0226 14:25:56.976549 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.186219 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535266-dcg72"]
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.192575 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.192787 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.193052 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.198616 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-dcg72"] Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.276991 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnndz\" (UniqueName: \"kubernetes.io/projected/db229f1d-ed59-4305-b965-af3d6239ff64-kube-api-access-jnndz\") pod \"auto-csr-approver-29535266-dcg72\" (UID: \"db229f1d-ed59-4305-b965-af3d6239ff64\") " pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.380763 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnndz\" (UniqueName: \"kubernetes.io/projected/db229f1d-ed59-4305-b965-af3d6239ff64-kube-api-access-jnndz\") pod \"auto-csr-approver-29535266-dcg72\" (UID: \"db229f1d-ed59-4305-b965-af3d6239ff64\") " pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.420805 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnndz\" (UniqueName: \"kubernetes.io/projected/db229f1d-ed59-4305-b965-af3d6239ff64-kube-api-access-jnndz\") pod \"auto-csr-approver-29535266-dcg72\" (UID: \"db229f1d-ed59-4305-b965-af3d6239ff64\") " pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.522489 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:00 crc kubenswrapper[4724]: I0226 14:26:00.812254 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pmgf6" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" probeResult="failure" output=< Feb 26 14:26:00 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:26:00 crc kubenswrapper[4724]: > Feb 26 14:26:01 crc kubenswrapper[4724]: I0226 14:26:01.630257 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-dcg72"] Feb 26 14:26:02 crc kubenswrapper[4724]: I0226 14:26:02.047553 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-dcg72" event={"ID":"db229f1d-ed59-4305-b965-af3d6239ff64","Type":"ContainerStarted","Data":"7a2b5d73ec16be70891891f06b7cf1bc1963ae4da93b76fda8f409f13f5a770f"} Feb 26 14:26:04 crc kubenswrapper[4724]: I0226 14:26:04.077550 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-dcg72" event={"ID":"db229f1d-ed59-4305-b965-af3d6239ff64","Type":"ContainerStarted","Data":"e90bc583bc4edc4b0bf3041c5cc293a55e0113a1de9bdd5b67d13f0f35fbfb6d"} Feb 26 14:26:04 crc kubenswrapper[4724]: I0226 14:26:04.102432 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535266-dcg72" podStartSLOduration=3.07405881 podStartE2EDuration="4.102400449s" podCreationTimestamp="2026-02-26 14:26:00 +0000 UTC" firstStartedPulling="2026-02-26 14:26:01.650340701 +0000 UTC m=+12028.306079816" lastFinishedPulling="2026-02-26 14:26:02.67868234 +0000 UTC m=+12029.334421455" observedRunningTime="2026-02-26 14:26:04.095553199 +0000 UTC m=+12030.751292314" watchObservedRunningTime="2026-02-26 14:26:04.102400449 +0000 UTC m=+12030.758139564" Feb 26 14:26:06 crc kubenswrapper[4724]: I0226 14:26:06.101812 4724 generic.go:334] "Generic (PLEG): container finished" podID="db229f1d-ed59-4305-b965-af3d6239ff64" containerID="e90bc583bc4edc4b0bf3041c5cc293a55e0113a1de9bdd5b67d13f0f35fbfb6d" exitCode=0 Feb 26 14:26:06 crc kubenswrapper[4724]: I0226 14:26:06.101893 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-dcg72" event={"ID":"db229f1d-ed59-4305-b965-af3d6239ff64","Type":"ContainerDied","Data":"e90bc583bc4edc4b0bf3041c5cc293a55e0113a1de9bdd5b67d13f0f35fbfb6d"} Feb 26 14:26:07 crc kubenswrapper[4724]: I0226 14:26:07.658748 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:07 crc kubenswrapper[4724]: I0226 14:26:07.805228 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnndz\" (UniqueName: \"kubernetes.io/projected/db229f1d-ed59-4305-b965-af3d6239ff64-kube-api-access-jnndz\") pod \"db229f1d-ed59-4305-b965-af3d6239ff64\" (UID: \"db229f1d-ed59-4305-b965-af3d6239ff64\") " Feb 26 14:26:07 crc kubenswrapper[4724]: I0226 14:26:07.827838 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db229f1d-ed59-4305-b965-af3d6239ff64-kube-api-access-jnndz" (OuterVolumeSpecName: "kube-api-access-jnndz") pod "db229f1d-ed59-4305-b965-af3d6239ff64" (UID: "db229f1d-ed59-4305-b965-af3d6239ff64"). InnerVolumeSpecName "kube-api-access-jnndz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:26:07 crc kubenswrapper[4724]: I0226 14:26:07.909791 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnndz\" (UniqueName: \"kubernetes.io/projected/db229f1d-ed59-4305-b965-af3d6239ff64-kube-api-access-jnndz\") on node \"crc\" DevicePath \"\"" Feb 26 14:26:08 crc kubenswrapper[4724]: I0226 14:26:08.127500 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-dcg72" event={"ID":"db229f1d-ed59-4305-b965-af3d6239ff64","Type":"ContainerDied","Data":"7a2b5d73ec16be70891891f06b7cf1bc1963ae4da93b76fda8f409f13f5a770f"} Feb 26 14:26:08 crc kubenswrapper[4724]: I0226 14:26:08.127551 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2b5d73ec16be70891891f06b7cf1bc1963ae4da93b76fda8f409f13f5a770f" Feb 26 14:26:08 crc kubenswrapper[4724]: I0226 14:26:08.127559 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-dcg72" Feb 26 14:26:08 crc kubenswrapper[4724]: I0226 14:26:08.206607 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-zb5c6"] Feb 26 14:26:08 crc kubenswrapper[4724]: I0226 14:26:08.221759 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-zb5c6"] Feb 26 14:26:09 crc kubenswrapper[4724]: I0226 14:26:09.988616 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b662c1-1e34-45c8-b790-e7ff1995d0e3" path="/var/lib/kubelet/pods/75b662c1-1e34-45c8-b790-e7ff1995d0e3/volumes" Feb 26 14:26:10 crc kubenswrapper[4724]: I0226 14:26:10.764872 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pmgf6" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" probeResult="failure" output=< Feb 26 14:26:10 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:26:10 crc kubenswrapper[4724]: > Feb 26 14:26:11 crc kubenswrapper[4724]: I0226 14:26:11.976851 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:26:11 crc kubenswrapper[4724]: E0226 14:26:11.977201 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:26:19 crc kubenswrapper[4724]: I0226 14:26:19.763809 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pmgf6" Feb 26 14:26:20 crc kubenswrapper[4724]: I0226 14:26:20.060837 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pmgf6" Feb 26 14:26:20 crc kubenswrapper[4724]: I0226 14:26:20.167389 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pmgf6"] Feb 26 14:26:21 crc kubenswrapper[4724]: I0226 14:26:21.306126 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pmgf6" 
podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" containerID="cri-o://f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d" gracePeriod=2 Feb 26 14:26:21 crc kubenswrapper[4724]: I0226 14:26:21.897745 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmgf6" Feb 26 14:26:21 crc kubenswrapper[4724]: I0226 14:26:21.997972 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-utilities\") pod \"b9bb99a6-811e-4fa5-819f-89d20144957b\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " Feb 26 14:26:21 crc kubenswrapper[4724]: I0226 14:26:21.998327 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r664r\" (UniqueName: \"kubernetes.io/projected/b9bb99a6-811e-4fa5-819f-89d20144957b-kube-api-access-r664r\") pod \"b9bb99a6-811e-4fa5-819f-89d20144957b\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " Feb 26 14:26:21 crc kubenswrapper[4724]: I0226 14:26:21.998469 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-catalog-content\") pod \"b9bb99a6-811e-4fa5-819f-89d20144957b\" (UID: \"b9bb99a6-811e-4fa5-819f-89d20144957b\") " Feb 26 14:26:21 crc kubenswrapper[4724]: I0226 14:26:21.998734 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-utilities" (OuterVolumeSpecName: "utilities") pod "b9bb99a6-811e-4fa5-819f-89d20144957b" (UID: "b9bb99a6-811e-4fa5-819f-89d20144957b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.005611 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.033572 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bb99a6-811e-4fa5-819f-89d20144957b-kube-api-access-r664r" (OuterVolumeSpecName: "kube-api-access-r664r") pod "b9bb99a6-811e-4fa5-819f-89d20144957b" (UID: "b9bb99a6-811e-4fa5-819f-89d20144957b"). InnerVolumeSpecName "kube-api-access-r664r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.094071 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bb99a6-811e-4fa5-819f-89d20144957b" (UID: "b9bb99a6-811e-4fa5-819f-89d20144957b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.108930 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r664r\" (UniqueName: \"kubernetes.io/projected/b9bb99a6-811e-4fa5-819f-89d20144957b-kube-api-access-r664r\") on node \"crc\" DevicePath \"\"" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.108965 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bb99a6-811e-4fa5-819f-89d20144957b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.320765 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerID="f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d" exitCode=0 Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.320831 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerDied","Data":"f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d"} Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.320845 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmgf6" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.320870 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmgf6" event={"ID":"b9bb99a6-811e-4fa5-819f-89d20144957b","Type":"ContainerDied","Data":"21df728ce9ef51fb5cea755af5276352995ad11e0761e954b68984e2de6a6f8f"} Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.320895 4724 scope.go:117] "RemoveContainer" containerID="f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.345943 4724 scope.go:117] "RemoveContainer" containerID="a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.368044 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pmgf6"] Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.380729 4724 scope.go:117] "RemoveContainer" containerID="61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.390144 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pmgf6"] Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.444858 4724 scope.go:117] "RemoveContainer" containerID="f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d" Feb 26 14:26:22 crc kubenswrapper[4724]: E0226 14:26:22.476137 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d\": container with ID starting with f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d not found: ID does not exist" containerID="f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.476217 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d"} err="failed to get container status 
\"f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d\": rpc error: code = NotFound desc = could not find container \"f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d\": container with ID starting with f62fa8a93e6c32b0421a8bc78c7fdb54d4683bc94b7a04cb183d01cbea92553d not found: ID does not exist" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.476246 4724 scope.go:117] "RemoveContainer" containerID="a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8" Feb 26 14:26:22 crc kubenswrapper[4724]: E0226 14:26:22.477075 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8\": container with ID starting with a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8 not found: ID does not exist" containerID="a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.477118 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8"} err="failed to get container status \"a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8\": rpc error: code = NotFound desc = could not find container \"a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8\": container with ID starting with a7dba53e9e926a532e135dba67b6b212c101201f4b6cac3f9a169cb81c5175a8 not found: ID does not exist" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.477130 4724 scope.go:117] "RemoveContainer" containerID="61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357" Feb 26 14:26:22 crc kubenswrapper[4724]: E0226 14:26:22.477465 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357\": container with ID starting with 61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357 not found: ID does not exist" containerID="61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.477481 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357"} err="failed to get container status \"61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357\": rpc error: code = NotFound desc = could not find container \"61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357\": container with ID starting with 61ac6b11c8a547d22a37411fc1a810856700b30fe446beb49706f1e50774f357 not found: ID does not exist" Feb 26 14:26:22 crc kubenswrapper[4724]: I0226 14:26:22.976140 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:26:22 crc kubenswrapper[4724]: E0226 14:26:22.976561 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:26:23 crc kubenswrapper[4724]: I0226 14:26:23.987609 4724 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" path="/var/lib/kubelet/pods/b9bb99a6-811e-4fa5-819f-89d20144957b/volumes" Feb 26 14:26:32 crc kubenswrapper[4724]: I0226 14:26:32.739119 4724 scope.go:117] "RemoveContainer" containerID="45169eda89aa3cccc6e42b3621549cf699e01872fbd5f5b22f099175e1cf2cc4" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.168137 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hp45w"] Feb 26 14:26:33 crc kubenswrapper[4724]: E0226 14:26:33.169366 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="extract-utilities" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.169394 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="extract-utilities" Feb 26 14:26:33 crc kubenswrapper[4724]: E0226 14:26:33.169422 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="extract-content" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.169430 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="extract-content" Feb 26 14:26:33 crc kubenswrapper[4724]: E0226 14:26:33.169461 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db229f1d-ed59-4305-b965-af3d6239ff64" containerName="oc" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.169468 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="db229f1d-ed59-4305-b965-af3d6239ff64" containerName="oc" Feb 26 14:26:33 crc kubenswrapper[4724]: E0226 14:26:33.169485 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.169492 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.169765 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="db229f1d-ed59-4305-b965-af3d6239ff64" containerName="oc" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.169806 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bb99a6-811e-4fa5-819f-89d20144957b" containerName="registry-server" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.171821 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.200285 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hp45w"] Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.337619 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s258s\" (UniqueName: \"kubernetes.io/projected/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-kube-api-access-s258s\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.337762 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-utilities\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.337834 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-catalog-content\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.443113 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s258s\" (UniqueName: \"kubernetes.io/projected/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-kube-api-access-s258s\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.443215 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-utilities\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.443274 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-catalog-content\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.444595 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-catalog-content\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.444867 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-utilities\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.470593 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s258s\" (UniqueName: \"kubernetes.io/projected/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-kube-api-access-s258s\") pod \"redhat-marketplace-hp45w\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:33 crc kubenswrapper[4724]: I0226 14:26:33.499773 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:34 crc kubenswrapper[4724]: I0226 14:26:34.162003 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hp45w"] Feb 26 14:26:34 crc kubenswrapper[4724]: I0226 14:26:34.461387 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerStarted","Data":"905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18"} Feb 26 14:26:34 crc kubenswrapper[4724]: I0226 14:26:34.461469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerStarted","Data":"af26596e2980dbf0d171cc1aa33a438f8131c7e726c3a9c0cd32bd737d380453"} Feb 26 14:26:35 crc kubenswrapper[4724]: I0226 14:26:35.473966 4724 generic.go:334] "Generic (PLEG): container finished" podID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerID="905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18" exitCode=0 Feb 26 14:26:35 crc kubenswrapper[4724]: I0226 14:26:35.474048 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerDied","Data":"905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18"} Feb 26 14:26:36 crc kubenswrapper[4724]: I0226 14:26:36.976008 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:26:36 crc kubenswrapper[4724]: E0226 14:26:36.977394 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:26:37 crc kubenswrapper[4724]: I0226 14:26:37.504284 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerStarted","Data":"9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b"} Feb 26 14:26:39 crc kubenswrapper[4724]: I0226 14:26:39.528789 4724 generic.go:334] "Generic (PLEG): container finished" podID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerID="9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b" exitCode=0 Feb 26 14:26:39 crc kubenswrapper[4724]: I0226 14:26:39.528889 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerDied","Data":"9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b"} Feb 26 14:26:40 crc kubenswrapper[4724]: I0226 14:26:40.539961 4724 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerStarted","Data":"4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0"} Feb 26 14:26:40 crc kubenswrapper[4724]: I0226 14:26:40.570752 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hp45w" podStartSLOduration=3.102453798 podStartE2EDuration="7.570687666s" podCreationTimestamp="2026-02-26 14:26:33 +0000 UTC" firstStartedPulling="2026-02-26 14:26:35.478018611 +0000 UTC m=+12062.133757726" lastFinishedPulling="2026-02-26 14:26:39.946252479 +0000 UTC m=+12066.601991594" observedRunningTime="2026-02-26 14:26:40.558654546 +0000 UTC m=+12067.214393671" watchObservedRunningTime="2026-02-26 14:26:40.570687666 +0000 UTC m=+12067.226426781" Feb 26 14:26:43 crc kubenswrapper[4724]: I0226 14:26:43.500611 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:43 crc kubenswrapper[4724]: I0226 14:26:43.501002 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:26:44 crc kubenswrapper[4724]: I0226 14:26:44.559760 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hp45w" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="registry-server" probeResult="failure" output=< Feb 26 14:26:44 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:26:44 crc kubenswrapper[4724]: > Feb 26 14:26:49 crc kubenswrapper[4724]: I0226 14:26:49.975663 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:26:49 crc kubenswrapper[4724]: E0226 14:26:49.976497 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:26:54 crc kubenswrapper[4724]: I0226 14:26:54.570886 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hp45w" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="registry-server" probeResult="failure" output=< Feb 26 14:26:54 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:26:54 crc kubenswrapper[4724]: > Feb 26 14:27:00 crc kubenswrapper[4724]: I0226 14:27:00.976595 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:27:00 crc kubenswrapper[4724]: E0226 14:27:00.978721 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:27:03 crc kubenswrapper[4724]: I0226 14:27:03.567562 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:27:03 crc kubenswrapper[4724]: I0226 14:27:03.637523 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:27:04 crc kubenswrapper[4724]: I0226 14:27:04.372579 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hp45w"] Feb 26 14:27:04 crc kubenswrapper[4724]: I0226 14:27:04.787389 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hp45w" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="registry-server" containerID="cri-o://4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0" gracePeriod=2 Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.700664 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.787824 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-utilities\") pod \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.787916 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s258s\" (UniqueName: \"kubernetes.io/projected/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-kube-api-access-s258s\") pod \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.788223 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-catalog-content\") pod \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\" (UID: \"1d4280bc-b39b-4c41-aabf-16b0a7583a6a\") " Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.791682 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-utilities" (OuterVolumeSpecName: "utilities") pod "1d4280bc-b39b-4c41-aabf-16b0a7583a6a" (UID: "1d4280bc-b39b-4c41-aabf-16b0a7583a6a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.813994 4724 generic.go:334] "Generic (PLEG): container finished" podID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerID="4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0" exitCode=0 Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.814268 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerDied","Data":"4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0"} Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.814368 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hp45w" event={"ID":"1d4280bc-b39b-4c41-aabf-16b0a7583a6a","Type":"ContainerDied","Data":"af26596e2980dbf0d171cc1aa33a438f8131c7e726c3a9c0cd32bd737d380453"} Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.814399 4724 scope.go:117] "RemoveContainer" containerID="4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.814464 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hp45w" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.825301 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-kube-api-access-s258s" (OuterVolumeSpecName: "kube-api-access-s258s") pod "1d4280bc-b39b-4c41-aabf-16b0a7583a6a" (UID: "1d4280bc-b39b-4c41-aabf-16b0a7583a6a"). InnerVolumeSpecName "kube-api-access-s258s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.851490 4724 scope.go:117] "RemoveContainer" containerID="9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.868295 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d4280bc-b39b-4c41-aabf-16b0a7583a6a" (UID: "1d4280bc-b39b-4c41-aabf-16b0a7583a6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.889656 4724 scope.go:117] "RemoveContainer" containerID="905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.892742 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.892827 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s258s\" (UniqueName: \"kubernetes.io/projected/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-kube-api-access-s258s\") on node \"crc\" DevicePath \"\"" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.892846 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4280bc-b39b-4c41-aabf-16b0a7583a6a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.942024 4724 scope.go:117] "RemoveContainer" containerID="4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0" Feb 26 14:27:05 crc kubenswrapper[4724]: E0226 14:27:05.942737 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0\": container with ID starting with 4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0 not found: ID does not exist" containerID="4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.942789 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0"} err="failed to get container status \"4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0\": rpc error: code = NotFound desc = could not find container \"4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0\": container with ID starting with 4fddb580d39f9063e5ddbb6f828d9b8f1ffe4f598a40023c0f3d1b3ee5b49fe0 not found: ID does not exist" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.942819 4724 scope.go:117] "RemoveContainer" containerID="9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b" Feb 26 14:27:05 crc kubenswrapper[4724]: E0226 14:27:05.943393 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b\": container with ID starting with 9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b not found: ID does not exist" containerID="9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.943422 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b"} err="failed to get container status \"9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b\": rpc error: code = NotFound desc = could not find container \"9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b\": container with ID starting with 9d459acbb226f752378f21c5ca04b751ac72c4b6deff178d392d3ed4a5b2440b not found: ID does not exist" Feb 26 14:27:05 crc 
kubenswrapper[4724]: I0226 14:27:05.943434 4724 scope.go:117] "RemoveContainer" containerID="905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18" Feb 26 14:27:05 crc kubenswrapper[4724]: E0226 14:27:05.943932 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18\": container with ID starting with 905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18 not found: ID does not exist" containerID="905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18" Feb 26 14:27:05 crc kubenswrapper[4724]: I0226 14:27:05.944003 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18"} err="failed to get container status \"905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18\": rpc error: code = NotFound desc = could not find container \"905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18\": container with ID starting with 905e1806cf171f68fadc9022e2f58289e145f8f4fd13bbac03e8f28a6011df18 not found: ID does not exist" Feb 26 14:27:06 crc kubenswrapper[4724]: I0226 14:27:06.162055 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hp45w"] Feb 26 14:27:06 crc kubenswrapper[4724]: I0226 14:27:06.175287 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hp45w"] Feb 26 14:27:07 crc kubenswrapper[4724]: I0226 14:27:07.998842 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" path="/var/lib/kubelet/pods/1d4280bc-b39b-4c41-aabf-16b0a7583a6a/volumes" Feb 26 14:27:16 crc kubenswrapper[4724]: I0226 14:27:16.976946 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:27:16 crc kubenswrapper[4724]: E0226 14:27:16.979803 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:27:31 crc kubenswrapper[4724]: I0226 14:27:31.975844 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:27:31 crc kubenswrapper[4724]: E0226 14:27:31.976912 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:27:43 crc kubenswrapper[4724]: I0226 14:27:43.989834 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:27:43 crc kubenswrapper[4724]: E0226 14:27:43.991094 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:27:56 crc kubenswrapper[4724]: I0226 14:27:56.977096 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:27:56 crc kubenswrapper[4724]: E0226 14:27:56.978664 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.174861 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535268-vw8sv"] Feb 26 14:28:00 crc kubenswrapper[4724]: E0226 14:28:00.178086 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="extract-utilities" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.178117 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="extract-utilities" Feb 26 14:28:00 crc kubenswrapper[4724]: E0226 14:28:00.178128 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="registry-server" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.178136 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="registry-server" Feb 26 14:28:00 crc kubenswrapper[4724]: E0226 14:28:00.178200 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="extract-content" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.178207 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="extract-content" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.178411 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4280bc-b39b-4c41-aabf-16b0a7583a6a" containerName="registry-server" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.179912 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.183084 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.184477 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.190094 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.203418 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-vw8sv"] Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.294649 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52cbb\" (UniqueName: \"kubernetes.io/projected/7f9cce36-8e73-4532-a822-9be5c3933b75-kube-api-access-52cbb\") pod \"auto-csr-approver-29535268-vw8sv\" (UID: \"7f9cce36-8e73-4532-a822-9be5c3933b75\") " pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.398625 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52cbb\" (UniqueName: \"kubernetes.io/projected/7f9cce36-8e73-4532-a822-9be5c3933b75-kube-api-access-52cbb\") pod \"auto-csr-approver-29535268-vw8sv\" (UID: \"7f9cce36-8e73-4532-a822-9be5c3933b75\") " pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.439828 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52cbb\" (UniqueName: \"kubernetes.io/projected/7f9cce36-8e73-4532-a822-9be5c3933b75-kube-api-access-52cbb\") pod \"auto-csr-approver-29535268-vw8sv\" (UID: \"7f9cce36-8e73-4532-a822-9be5c3933b75\") " pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:00 crc kubenswrapper[4724]: I0226 14:28:00.513948 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:01 crc kubenswrapper[4724]: I0226 14:28:01.507515 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-vw8sv"] Feb 26 14:28:02 crc kubenswrapper[4724]: I0226 14:28:02.386933 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" event={"ID":"7f9cce36-8e73-4532-a822-9be5c3933b75","Type":"ContainerStarted","Data":"b80f7819524a4202b0dbb405308dfbce13766bae83670b9b7339320ec1fa5240"} Feb 26 14:28:03 crc kubenswrapper[4724]: I0226 14:28:03.398110 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" event={"ID":"7f9cce36-8e73-4532-a822-9be5c3933b75","Type":"ContainerStarted","Data":"8324c0fe97f379fed4001064b18769f3152293caf4750fb4974d009354ce259d"} Feb 26 14:28:03 crc kubenswrapper[4724]: I0226 14:28:03.440326 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" podStartSLOduration=2.177757898 podStartE2EDuration="3.44028798s" podCreationTimestamp="2026-02-26 14:28:00 +0000 UTC" firstStartedPulling="2026-02-26 14:28:01.518742803 +0000 UTC m=+12148.174481918" lastFinishedPulling="2026-02-26 14:28:02.781272885 +0000 UTC m=+12149.437012000" observedRunningTime="2026-02-26 14:28:03.419961534 +0000 UTC m=+12150.075700669" watchObservedRunningTime="2026-02-26 14:28:03.44028798 +0000 UTC m=+12150.096027095" Feb 26 14:28:07 crc kubenswrapper[4724]: I0226 14:28:07.444035 4724 generic.go:334] "Generic (PLEG): container finished" podID="7f9cce36-8e73-4532-a822-9be5c3933b75" containerID="8324c0fe97f379fed4001064b18769f3152293caf4750fb4974d009354ce259d" exitCode=0 Feb 26 14:28:07 crc kubenswrapper[4724]: I0226 14:28:07.444161 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" event={"ID":"7f9cce36-8e73-4532-a822-9be5c3933b75","Type":"ContainerDied","Data":"8324c0fe97f379fed4001064b18769f3152293caf4750fb4974d009354ce259d"} Feb 26 14:28:08 crc kubenswrapper[4724]: I0226 14:28:08.868840 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:08 crc kubenswrapper[4724]: I0226 14:28:08.911780 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52cbb\" (UniqueName: \"kubernetes.io/projected/7f9cce36-8e73-4532-a822-9be5c3933b75-kube-api-access-52cbb\") pod \"7f9cce36-8e73-4532-a822-9be5c3933b75\" (UID: \"7f9cce36-8e73-4532-a822-9be5c3933b75\") " Feb 26 14:28:08 crc kubenswrapper[4724]: I0226 14:28:08.939451 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9cce36-8e73-4532-a822-9be5c3933b75-kube-api-access-52cbb" (OuterVolumeSpecName: "kube-api-access-52cbb") pod "7f9cce36-8e73-4532-a822-9be5c3933b75" (UID: "7f9cce36-8e73-4532-a822-9be5c3933b75"). InnerVolumeSpecName "kube-api-access-52cbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.015853 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52cbb\" (UniqueName: \"kubernetes.io/projected/7f9cce36-8e73-4532-a822-9be5c3933b75-kube-api-access-52cbb\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.468164 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" event={"ID":"7f9cce36-8e73-4532-a822-9be5c3933b75","Type":"ContainerDied","Data":"b80f7819524a4202b0dbb405308dfbce13766bae83670b9b7339320ec1fa5240"} Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.468248 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80f7819524a4202b0dbb405308dfbce13766bae83670b9b7339320ec1fa5240" Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.468324 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-vw8sv" Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.547258 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-9cq4w"] Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.556713 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-9cq4w"] Feb 26 14:28:09 crc kubenswrapper[4724]: I0226 14:28:09.994100 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59969dce-471f-4172-8581-9f605d489c4f" path="/var/lib/kubelet/pods/59969dce-471f-4172-8581-9f605d489c4f/volumes" Feb 26 14:28:10 crc kubenswrapper[4724]: I0226 14:28:10.976154 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:28:10 crc kubenswrapper[4724]: E0226 14:28:10.976626 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:28:25 crc kubenswrapper[4724]: I0226 14:28:25.977418 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389" Feb 26 14:28:26 crc kubenswrapper[4724]: I0226 14:28:26.651102 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"bb269d4bc37375bbed14056416a1ef4f2f56debb2a371c2163fb15897fe830d6"} Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.885878 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cc56c757c-ds2pf"] Feb 26 14:28:32 crc kubenswrapper[4724]: E0226 14:28:32.887350 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9cce36-8e73-4532-a822-9be5c3933b75" containerName="oc" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.887370 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9cce36-8e73-4532-a822-9be5c3933b75" containerName="oc" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.887618 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7f9cce36-8e73-4532-a822-9be5c3933b75" containerName="oc" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.889808 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.945617 4724 scope.go:117] "RemoveContainer" containerID="ce052d78551bac448a3cc6de4eaa682637309e321495edff29aa69a41727db73" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.963399 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc56c757c-ds2pf"] Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.973878 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6lq2\" (UniqueName: \"kubernetes.io/projected/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-kube-api-access-g6lq2\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.974656 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-public-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.974755 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-httpd-config\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.974828 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-config\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.974909 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-combined-ca-bundle\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.974985 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-ovndb-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:32 crc kubenswrapper[4724]: I0226 14:28:32.975074 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-internal-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.085004 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-ovndb-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.085225 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-internal-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.085378 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6lq2\" (UniqueName: \"kubernetes.io/projected/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-kube-api-access-g6lq2\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.085949 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-public-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.086012 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-httpd-config\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.086041 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-config\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.086098 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-combined-ca-bundle\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.104926 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-combined-ca-bundle\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.105932 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-internal-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.109975 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-httpd-config\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " 
pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.110083 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6lq2\" (UniqueName: \"kubernetes.io/projected/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-kube-api-access-g6lq2\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.110515 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-ovndb-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.111567 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-public-tls-certs\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.118689 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf-config\") pod \"neutron-cc56c757c-ds2pf\" (UID: \"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf\") " pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:33 crc kubenswrapper[4724]: I0226 14:28:33.218357 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:35 crc kubenswrapper[4724]: I0226 14:28:35.631101 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc56c757c-ds2pf"] Feb 26 14:28:35 crc kubenswrapper[4724]: I0226 14:28:35.779446 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc56c757c-ds2pf" event={"ID":"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf","Type":"ContainerStarted","Data":"ded90ca30680b5b10b525ea6df1fb729f884c947420421766b2b9b0dfe6a4b4a"} Feb 26 14:28:36 crc kubenswrapper[4724]: I0226 14:28:36.795900 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc56c757c-ds2pf" event={"ID":"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf","Type":"ContainerStarted","Data":"94886ca83b182e4d1f4a311df5dcf9542b91fea7f98278799056c26aa3b1ad07"} Feb 26 14:28:36 crc kubenswrapper[4724]: I0226 14:28:36.796770 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cc56c757c-ds2pf" Feb 26 14:28:36 crc kubenswrapper[4724]: I0226 14:28:36.796785 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc56c757c-ds2pf" event={"ID":"4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf","Type":"ContainerStarted","Data":"8995488f63a4be131d1dd4e8071563ff9c0f88ee6515cec2e797c0eeb7bc93dd"} Feb 26 14:28:36 crc kubenswrapper[4724]: I0226 14:28:36.840643 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cc56c757c-ds2pf" podStartSLOduration=4.840613461 podStartE2EDuration="4.840613461s" podCreationTimestamp="2026-02-26 14:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:28:36.831657599 +0000 UTC m=+12183.487396744" watchObservedRunningTime="2026-02-26 14:28:36.840613461 +0000 UTC m=+12183.496352576" Feb 26 14:28:44 crc 
kubenswrapper[4724]: I0226 14:28:44.831877 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xh4dd"] Feb 26 14:28:44 crc kubenswrapper[4724]: I0226 14:28:44.836741 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:44 crc kubenswrapper[4724]: I0226 14:28:44.845715 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh4dd"] Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.000089 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrj42\" (UniqueName: \"kubernetes.io/projected/b5ed1721-2470-41e4-aab3-bb275535dc47-kube-api-access-rrj42\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.000152 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-catalog-content\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.000274 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-utilities\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.101648 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrj42\" (UniqueName: \"kubernetes.io/projected/b5ed1721-2470-41e4-aab3-bb275535dc47-kube-api-access-rrj42\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.108003 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-catalog-content\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.108549 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-catalog-content\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.108611 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-utilities\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.108952 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-utilities\") pod 
\"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.130188 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrj42\" (UniqueName: \"kubernetes.io/projected/b5ed1721-2470-41e4-aab3-bb275535dc47-kube-api-access-rrj42\") pod \"redhat-operators-xh4dd\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.158852 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.767407 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh4dd"] Feb 26 14:28:45 crc kubenswrapper[4724]: W0226 14:28:45.778910 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ed1721_2470_41e4_aab3_bb275535dc47.slice/crio-a043b5b1429662265899c00207b5e02ea3a9aec3a1cf9b1ae33d93164cd3d10d WatchSource:0}: Error finding container a043b5b1429662265899c00207b5e02ea3a9aec3a1cf9b1ae33d93164cd3d10d: Status 404 returned error can't find the container with id a043b5b1429662265899c00207b5e02ea3a9aec3a1cf9b1ae33d93164cd3d10d Feb 26 14:28:45 crc kubenswrapper[4724]: I0226 14:28:45.887510 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerStarted","Data":"a043b5b1429662265899c00207b5e02ea3a9aec3a1cf9b1ae33d93164cd3d10d"} Feb 26 14:28:46 crc kubenswrapper[4724]: I0226 14:28:46.900962 4724 generic.go:334] "Generic (PLEG): container finished" podID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerID="1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757" exitCode=0 Feb 26 14:28:46 crc kubenswrapper[4724]: I0226 14:28:46.904217 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerDied","Data":"1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757"} Feb 26 14:28:49 crc kubenswrapper[4724]: I0226 14:28:49.949456 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerStarted","Data":"93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09"} Feb 26 14:29:02 crc kubenswrapper[4724]: I0226 14:29:02.084109 4724 generic.go:334] "Generic (PLEG): container finished" podID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerID="93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09" exitCode=0 Feb 26 14:29:02 crc kubenswrapper[4724]: I0226 14:29:02.084167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerDied","Data":"93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09"} Feb 26 14:29:02 crc kubenswrapper[4724]: I0226 14:29:02.101407 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:29:03 crc kubenswrapper[4724]: I0226 14:29:03.238649 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cc56c757c-ds2pf" 
Feb 26 14:29:03 crc kubenswrapper[4724]: I0226 14:29:03.358766 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f468d56b9-wpq97"] Feb 26 14:29:03 crc kubenswrapper[4724]: I0226 14:29:03.366880 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f468d56b9-wpq97" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-api" containerID="cri-o://c488d0b7b83fc46589dec68ef32408ea8fb8c617255772f9efbe3c55705a0422" gracePeriod=30 Feb 26 14:29:03 crc kubenswrapper[4724]: I0226 14:29:03.367670 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f468d56b9-wpq97" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-httpd" containerID="cri-o://b0708210790c7874b094f3b7159c4d2badd22d6cd0d1ce6cf79ab92203079526" gracePeriod=30 Feb 26 14:29:04 crc kubenswrapper[4724]: I0226 14:29:04.106803 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerStarted","Data":"bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01"} Feb 26 14:29:04 crc kubenswrapper[4724]: I0226 14:29:04.132760 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xh4dd" podStartSLOduration=4.232760585 podStartE2EDuration="20.132718592s" podCreationTimestamp="2026-02-26 14:28:44 +0000 UTC" firstStartedPulling="2026-02-26 14:28:46.907554235 +0000 UTC m=+12193.563293350" lastFinishedPulling="2026-02-26 14:29:02.807512232 +0000 UTC m=+12209.463251357" observedRunningTime="2026-02-26 14:29:04.124731683 +0000 UTC m=+12210.780470798" watchObservedRunningTime="2026-02-26 14:29:04.132718592 +0000 UTC m=+12210.788457707" Feb 26 14:29:05 crc kubenswrapper[4724]: I0226 14:29:05.124366 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerID="b0708210790c7874b094f3b7159c4d2badd22d6cd0d1ce6cf79ab92203079526" exitCode=0 Feb 26 14:29:05 crc kubenswrapper[4724]: I0226 14:29:05.124461 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f468d56b9-wpq97" event={"ID":"b4d73817-96a8-4f4b-8900-777cd57d2d4c","Type":"ContainerDied","Data":"b0708210790c7874b094f3b7159c4d2badd22d6cd0d1ce6cf79ab92203079526"} Feb 26 14:29:05 crc kubenswrapper[4724]: I0226 14:29:05.159208 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:29:05 crc kubenswrapper[4724]: I0226 14:29:05.159307 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:29:06 crc kubenswrapper[4724]: I0226 14:29:06.212187 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:29:06 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:29:06 crc kubenswrapper[4724]: > Feb 26 14:29:11 crc kubenswrapper[4724]: I0226 14:29:11.181305 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerID="c488d0b7b83fc46589dec68ef32408ea8fb8c617255772f9efbe3c55705a0422" exitCode=0 Feb 26 14:29:11 crc kubenswrapper[4724]: I0226 14:29:11.181364 4724 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-6f468d56b9-wpq97" event={"ID":"b4d73817-96a8-4f4b-8900-777cd57d2d4c","Type":"ContainerDied","Data":"c488d0b7b83fc46589dec68ef32408ea8fb8c617255772f9efbe3c55705a0422"} Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.632363 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729218 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-internal-tls-certs\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729298 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-ovndb-tls-certs\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729366 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-httpd-config\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729474 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-public-tls-certs\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729520 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-config\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729579 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-combined-ca-bundle\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.729623 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb4w2\" (UniqueName: \"kubernetes.io/projected/b4d73817-96a8-4f4b-8900-777cd57d2d4c-kube-api-access-jb4w2\") pod \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\" (UID: \"b4d73817-96a8-4f4b-8900-777cd57d2d4c\") " Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.762632 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.777538 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d73817-96a8-4f4b-8900-777cd57d2d4c-kube-api-access-jb4w2" (OuterVolumeSpecName: "kube-api-access-jb4w2") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "kube-api-access-jb4w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.825043 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.832492 4724 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.832554 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.832572 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb4w2\" (UniqueName: \"kubernetes.io/projected/b4d73817-96a8-4f4b-8900-777cd57d2d4c-kube-api-access-jb4w2\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.848565 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.869300 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.882689 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-config" (OuterVolumeSpecName: "config") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.884542 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b4d73817-96a8-4f4b-8900-777cd57d2d4c" (UID: "b4d73817-96a8-4f4b-8900-777cd57d2d4c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.934132 4724 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.934166 4724 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.934189 4724 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:12 crc kubenswrapper[4724]: I0226 14:29:12.934197 4724 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d73817-96a8-4f4b-8900-777cd57d2d4c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.198356 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f468d56b9-wpq97" event={"ID":"b4d73817-96a8-4f4b-8900-777cd57d2d4c","Type":"ContainerDied","Data":"cc9865120f619ab4c71d9789f05d65b9bb2e3b07b7cd9b82232cfda55c830dcb"} Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.198406 4724 scope.go:117] "RemoveContainer" containerID="b0708210790c7874b094f3b7159c4d2badd22d6cd0d1ce6cf79ab92203079526" Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.198460 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f468d56b9-wpq97" Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.241262 4724 scope.go:117] "RemoveContainer" containerID="c488d0b7b83fc46589dec68ef32408ea8fb8c617255772f9efbe3c55705a0422" Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.247866 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f468d56b9-wpq97"] Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.266679 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f468d56b9-wpq97"] Feb 26 14:29:13 crc kubenswrapper[4724]: I0226 14:29:13.988155 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" path="/var/lib/kubelet/pods/b4d73817-96a8-4f4b-8900-777cd57d2d4c/volumes" Feb 26 14:29:16 crc kubenswrapper[4724]: I0226 14:29:16.220942 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:29:16 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:29:16 crc kubenswrapper[4724]: > Feb 26 14:29:26 crc kubenswrapper[4724]: I0226 14:29:26.220730 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:29:26 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:29:26 crc kubenswrapper[4724]: > Feb 26 14:29:36 crc kubenswrapper[4724]: I0226 14:29:36.227987 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" 
podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:29:36 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:29:36 crc kubenswrapper[4724]: > Feb 26 14:29:46 crc kubenswrapper[4724]: I0226 14:29:46.236317 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:29:46 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:29:46 crc kubenswrapper[4724]: > Feb 26 14:29:56 crc kubenswrapper[4724]: I0226 14:29:56.213543 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:29:56 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:29:56 crc kubenswrapper[4724]: > Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.230773 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w"] Feb 26 14:30:00 crc kubenswrapper[4724]: E0226 14:30:00.234433 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-httpd" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.234491 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-httpd" Feb 26 14:30:00 crc kubenswrapper[4724]: E0226 14:30:00.234558 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-api" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.234566 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-api" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.235523 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-httpd" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.235564 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d73817-96a8-4f4b-8900-777cd57d2d4c" containerName="neutron-api" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.238167 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.252323 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w"] Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.258655 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.259030 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.305533 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsnqx\" (UniqueName: \"kubernetes.io/projected/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-kube-api-access-lsnqx\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.305605 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-secret-volume\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.305644 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-config-volume\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.311675 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535270-xcrrj"] Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.314902 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.318550 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.320523 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.320530 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.324564 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-xcrrj"] Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.407896 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-config-volume\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.408091 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbptl\" (UniqueName: \"kubernetes.io/projected/2e815183-e92b-4ff4-be6d-7aac3d026e88-kube-api-access-dbptl\") pod \"auto-csr-approver-29535270-xcrrj\" (UID: \"2e815183-e92b-4ff4-be6d-7aac3d026e88\") " pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.408368 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsnqx\" (UniqueName: \"kubernetes.io/projected/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-kube-api-access-lsnqx\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.408504 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-secret-volume\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.409112 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-config-volume\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.436564 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-secret-volume\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.438540 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsnqx\" (UniqueName: 
\"kubernetes.io/projected/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-kube-api-access-lsnqx\") pod \"collect-profiles-29535270-cbn7w\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.509462 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbptl\" (UniqueName: \"kubernetes.io/projected/2e815183-e92b-4ff4-be6d-7aac3d026e88-kube-api-access-dbptl\") pod \"auto-csr-approver-29535270-xcrrj\" (UID: \"2e815183-e92b-4ff4-be6d-7aac3d026e88\") " pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.531227 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbptl\" (UniqueName: \"kubernetes.io/projected/2e815183-e92b-4ff4-be6d-7aac3d026e88-kube-api-access-dbptl\") pod \"auto-csr-approver-29535270-xcrrj\" (UID: \"2e815183-e92b-4ff4-be6d-7aac3d026e88\") " pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.582707 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:00 crc kubenswrapper[4724]: I0226 14:30:00.630311 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:01 crc kubenswrapper[4724]: I0226 14:30:01.802487 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w"] Feb 26 14:30:01 crc kubenswrapper[4724]: I0226 14:30:01.813095 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-xcrrj"] Feb 26 14:30:02 crc kubenswrapper[4724]: I0226 14:30:02.797595 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" event={"ID":"c59ea030-3ba8-4b9f-9dd0-b071cf76659d","Type":"ContainerStarted","Data":"71fbaabfee157b70c79e0ca32d22fbd8d8a5d8eab69805ea8155f7d38cc95d79"} Feb 26 14:30:02 crc kubenswrapper[4724]: I0226 14:30:02.798123 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" event={"ID":"c59ea030-3ba8-4b9f-9dd0-b071cf76659d","Type":"ContainerStarted","Data":"5c73bc7674c699f358c69f69f491275e49817ae88430cca55bf8ac7593f596bf"} Feb 26 14:30:02 crc kubenswrapper[4724]: I0226 14:30:02.801802 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" event={"ID":"2e815183-e92b-4ff4-be6d-7aac3d026e88","Type":"ContainerStarted","Data":"8de88007e0afe6af8a871e5111826d223731c8231802c8607e5c45ed61121f1e"} Feb 26 14:30:02 crc kubenswrapper[4724]: I0226 14:30:02.883611 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" podStartSLOduration=2.882077432 podStartE2EDuration="2.882077432s" podCreationTimestamp="2026-02-26 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:30:02.876168616 +0000 UTC m=+12269.531907731" watchObservedRunningTime="2026-02-26 14:30:02.882077432 +0000 UTC m=+12269.537816547" Feb 26 14:30:04 crc kubenswrapper[4724]: I0226 14:30:04.848898 4724 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" event={"ID":"c59ea030-3ba8-4b9f-9dd0-b071cf76659d","Type":"ContainerDied","Data":"71fbaabfee157b70c79e0ca32d22fbd8d8a5d8eab69805ea8155f7d38cc95d79"} Feb 26 14:30:04 crc kubenswrapper[4724]: I0226 14:30:04.849540 4724 generic.go:334] "Generic (PLEG): container finished" podID="c59ea030-3ba8-4b9f-9dd0-b071cf76659d" containerID="71fbaabfee157b70c79e0ca32d22fbd8d8a5d8eab69805ea8155f7d38cc95d79" exitCode=0 Feb 26 14:30:05 crc kubenswrapper[4724]: I0226 14:30:05.861994 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" event={"ID":"2e815183-e92b-4ff4-be6d-7aac3d026e88","Type":"ContainerStarted","Data":"4670d55d4f295923f56469e47e49bb0b2fc10e908ec7957066322fd420e84748"} Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.301784 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:30:06 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:30:06 crc kubenswrapper[4724]: > Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.466738 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.535854 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" podStartSLOduration=3.924602143 podStartE2EDuration="6.532759093s" podCreationTimestamp="2026-02-26 14:30:00 +0000 UTC" firstStartedPulling="2026-02-26 14:30:01.845572291 +0000 UTC m=+12268.501311406" lastFinishedPulling="2026-02-26 14:30:04.453729241 +0000 UTC m=+12271.109468356" observedRunningTime="2026-02-26 14:30:05.88431258 +0000 UTC m=+12272.540051715" watchObservedRunningTime="2026-02-26 14:30:06.532759093 +0000 UTC m=+12273.188498208" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.582076 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsnqx\" (UniqueName: \"kubernetes.io/projected/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-kube-api-access-lsnqx\") pod \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.582569 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-config-volume\") pod \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.582704 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-secret-volume\") pod \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\" (UID: \"c59ea030-3ba8-4b9f-9dd0-b071cf76659d\") " Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.585595 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-config-volume" (OuterVolumeSpecName: "config-volume") pod "c59ea030-3ba8-4b9f-9dd0-b071cf76659d" (UID: "c59ea030-3ba8-4b9f-9dd0-b071cf76659d"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.600948 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-kube-api-access-lsnqx" (OuterVolumeSpecName: "kube-api-access-lsnqx") pod "c59ea030-3ba8-4b9f-9dd0-b071cf76659d" (UID: "c59ea030-3ba8-4b9f-9dd0-b071cf76659d"). InnerVolumeSpecName "kube-api-access-lsnqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.600653 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c59ea030-3ba8-4b9f-9dd0-b071cf76659d" (UID: "c59ea030-3ba8-4b9f-9dd0-b071cf76659d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.690895 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsnqx\" (UniqueName: \"kubernetes.io/projected/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-kube-api-access-lsnqx\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.690950 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.690965 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c59ea030-3ba8-4b9f-9dd0-b071cf76659d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.874632 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" event={"ID":"c59ea030-3ba8-4b9f-9dd0-b071cf76659d","Type":"ContainerDied","Data":"5c73bc7674c699f358c69f69f491275e49817ae88430cca55bf8ac7593f596bf"} Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.874679 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-cbn7w" Feb 26 14:30:06 crc kubenswrapper[4724]: I0226 14:30:06.883773 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c73bc7674c699f358c69f69f491275e49817ae88430cca55bf8ac7593f596bf" Feb 26 14:30:07 crc kubenswrapper[4724]: I0226 14:30:07.743839 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx"] Feb 26 14:30:07 crc kubenswrapper[4724]: I0226 14:30:07.767532 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535225-qwlqx"] Feb 26 14:30:07 crc kubenswrapper[4724]: I0226 14:30:07.990817 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e24f3f4-e351-45ec-b54c-61eff2e0db52" path="/var/lib/kubelet/pods/4e24f3f4-e351-45ec-b54c-61eff2e0db52/volumes" Feb 26 14:30:08 crc kubenswrapper[4724]: I0226 14:30:08.898780 4724 generic.go:334] "Generic (PLEG): container finished" podID="2e815183-e92b-4ff4-be6d-7aac3d026e88" containerID="4670d55d4f295923f56469e47e49bb0b2fc10e908ec7957066322fd420e84748" exitCode=0 Feb 26 14:30:08 crc kubenswrapper[4724]: I0226 14:30:08.898820 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" event={"ID":"2e815183-e92b-4ff4-be6d-7aac3d026e88","Type":"ContainerDied","Data":"4670d55d4f295923f56469e47e49bb0b2fc10e908ec7957066322fd420e84748"} Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.368737 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.503845 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbptl\" (UniqueName: \"kubernetes.io/projected/2e815183-e92b-4ff4-be6d-7aac3d026e88-kube-api-access-dbptl\") pod \"2e815183-e92b-4ff4-be6d-7aac3d026e88\" (UID: \"2e815183-e92b-4ff4-be6d-7aac3d026e88\") " Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.575906 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e815183-e92b-4ff4-be6d-7aac3d026e88-kube-api-access-dbptl" (OuterVolumeSpecName: "kube-api-access-dbptl") pod "2e815183-e92b-4ff4-be6d-7aac3d026e88" (UID: "2e815183-e92b-4ff4-be6d-7aac3d026e88"). InnerVolumeSpecName "kube-api-access-dbptl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.606772 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbptl\" (UniqueName: \"kubernetes.io/projected/2e815183-e92b-4ff4-be6d-7aac3d026e88-kube-api-access-dbptl\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.924552 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" event={"ID":"2e815183-e92b-4ff4-be6d-7aac3d026e88","Type":"ContainerDied","Data":"8de88007e0afe6af8a871e5111826d223731c8231802c8607e5c45ed61121f1e"} Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.924620 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de88007e0afe6af8a871e5111826d223731c8231802c8607e5c45ed61121f1e" Feb 26 14:30:10 crc kubenswrapper[4724]: I0226 14:30:10.924629 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-xcrrj" Feb 26 14:30:11 crc kubenswrapper[4724]: I0226 14:30:11.006616 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-r7m5v"] Feb 26 14:30:11 crc kubenswrapper[4724]: I0226 14:30:11.023658 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-r7m5v"] Feb 26 14:30:11 crc kubenswrapper[4724]: I0226 14:30:11.988118 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f70d2b3-0274-454b-9b68-88a1e8bd8342" path="/var/lib/kubelet/pods/6f70d2b3-0274-454b-9b68-88a1e8bd8342/volumes" Feb 26 14:30:16 crc kubenswrapper[4724]: I0226 14:30:16.213501 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:30:16 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:30:16 crc kubenswrapper[4724]: > Feb 26 14:30:26 crc kubenswrapper[4724]: I0226 14:30:26.226483 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" probeResult="failure" output=< Feb 26 14:30:26 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:30:26 crc kubenswrapper[4724]: > Feb 26 14:30:33 crc kubenswrapper[4724]: I0226 14:30:33.193050 4724 scope.go:117] "RemoveContainer" containerID="aa03166c13d748bec0d954229d77b9e433469dd8a34a6bfc40042e2baab330fc" Feb 26 14:30:33 crc kubenswrapper[4724]: I0226 14:30:33.263749 4724 scope.go:117] "RemoveContainer" containerID="49e79f8a0dab22bf44243342edaad1a388ba3deab46da2a5474608a44279997b" Feb 26 14:30:35 crc kubenswrapper[4724]: I0226 14:30:35.268936 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:30:35 crc kubenswrapper[4724]: I0226 14:30:35.336338 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:30:35 crc kubenswrapper[4724]: I0226 14:30:35.513399 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh4dd"] Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.207074 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xh4dd" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server" containerID="cri-o://bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01" gracePeriod=2 Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.821054 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.938051 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrj42\" (UniqueName: \"kubernetes.io/projected/b5ed1721-2470-41e4-aab3-bb275535dc47-kube-api-access-rrj42\") pod \"b5ed1721-2470-41e4-aab3-bb275535dc47\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.938673 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-utilities\") pod \"b5ed1721-2470-41e4-aab3-bb275535dc47\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.939051 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-catalog-content\") pod \"b5ed1721-2470-41e4-aab3-bb275535dc47\" (UID: \"b5ed1721-2470-41e4-aab3-bb275535dc47\") " Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.939288 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-utilities" (OuterVolumeSpecName: "utilities") pod "b5ed1721-2470-41e4-aab3-bb275535dc47" (UID: "b5ed1721-2470-41e4-aab3-bb275535dc47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.941471 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:37 crc kubenswrapper[4724]: I0226 14:30:37.960764 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ed1721-2470-41e4-aab3-bb275535dc47-kube-api-access-rrj42" (OuterVolumeSpecName: "kube-api-access-rrj42") pod "b5ed1721-2470-41e4-aab3-bb275535dc47" (UID: "b5ed1721-2470-41e4-aab3-bb275535dc47"). InnerVolumeSpecName "kube-api-access-rrj42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.044861 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrj42\" (UniqueName: \"kubernetes.io/projected/b5ed1721-2470-41e4-aab3-bb275535dc47-kube-api-access-rrj42\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.088776 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5ed1721-2470-41e4-aab3-bb275535dc47" (UID: "b5ed1721-2470-41e4-aab3-bb275535dc47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.146754 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ed1721-2470-41e4-aab3-bb275535dc47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.217676 4724 generic.go:334] "Generic (PLEG): container finished" podID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerID="bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01" exitCode=0 Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.217716 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerDied","Data":"bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01"} Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.217751 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh4dd" event={"ID":"b5ed1721-2470-41e4-aab3-bb275535dc47","Type":"ContainerDied","Data":"a043b5b1429662265899c00207b5e02ea3a9aec3a1cf9b1ae33d93164cd3d10d"} Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.217765 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh4dd" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.217767 4724 scope.go:117] "RemoveContainer" containerID="bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.267608 4724 scope.go:117] "RemoveContainer" containerID="93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.272876 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh4dd"] Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.293507 4724 scope.go:117] "RemoveContainer" containerID="1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.296899 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xh4dd"] Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.334083 4724 scope.go:117] "RemoveContainer" containerID="bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01" Feb 26 14:30:38 crc kubenswrapper[4724]: E0226 14:30:38.338837 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01\": container with ID starting with bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01 not found: ID does not exist" containerID="bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01" Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.338897 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01"} err="failed to get container status \"bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01\": rpc error: code = NotFound desc = could not find container \"bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01\": container with ID starting with bd10386107d6198a6875e06205855bd65508d6b5d2f53d123572c4f683af8a01 not found: ID does not exist" Feb 26 14:30:38 crc 
kubenswrapper[4724]: I0226 14:30:38.338920 4724 scope.go:117] "RemoveContainer" containerID="93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09"
Feb 26 14:30:38 crc kubenswrapper[4724]: E0226 14:30:38.339373 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09\": container with ID starting with 93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09 not found: ID does not exist" containerID="93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09"
Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.339447 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09"} err="failed to get container status \"93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09\": rpc error: code = NotFound desc = could not find container \"93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09\": container with ID starting with 93ef160a88f849f263c2600ad7cf834ca6514ebdba89fff03823d0ea809f8d09 not found: ID does not exist"
Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.339497 4724 scope.go:117] "RemoveContainer" containerID="1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757"
Feb 26 14:30:38 crc kubenswrapper[4724]: E0226 14:30:38.339853 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757\": container with ID starting with 1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757 not found: ID does not exist" containerID="1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757"
Feb 26 14:30:38 crc kubenswrapper[4724]: I0226 14:30:38.339893 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757"} err="failed to get container status \"1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757\": rpc error: code = NotFound desc = could not find container \"1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757\": container with ID starting with 1dc66ca087bd25a8a371d3f4d0ac8faa7daef300fa4e48c4ddf39b48f7fe9757 not found: ID does not exist"
Feb 26 14:30:39 crc kubenswrapper[4724]: I0226 14:30:39.989101 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" path="/var/lib/kubelet/pods/b5ed1721-2470-41e4-aab3-bb275535dc47/volumes"
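The error/info pairs above look alarming but are routine cleanup noise: RemoveContainer runs for IDs that the preceding DELETE already removed, CRI-O answers the ContainerStatus RPC with NotFound, and the kubelet logs the error and carries on, because "already gone" is success for a delete. The usual way to express that idempotency with gRPC status codes, sketched in Go (removeContainer is a hypothetical stand-in for the CRI call):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical stand-in for the CRI RemoveContainer call;
// here it always answers the way CRI-O does in the log above.
func removeContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

// remove treats NotFound as success: the container is gone either way.
func remove(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	if err := remove("bd10386107d6"); err != nil {
		fmt.Println("remove failed:", err)
		return
	}
	fmt.Println("container gone")
}
```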
Feb 26 14:30:46 crc kubenswrapper[4724]: I0226 14:30:46.906025 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:30:46 crc kubenswrapper[4724]: I0226 14:30:46.906744 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:31:16 crc kubenswrapper[4724]: I0226 14:31:16.906566 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:31:16 crc kubenswrapper[4724]: I0226 14:31:16.907170 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:31:46 crc kubenswrapper[4724]: I0226 14:31:46.906673 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:31:46 crc kubenswrapper[4724]: I0226 14:31:46.907255 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:31:46 crc kubenswrapper[4724]: I0226 14:31:46.907304 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 14:31:46 crc kubenswrapper[4724]: I0226 14:31:46.908528 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb269d4bc37375bbed14056416a1ef4f2f56debb2a371c2163fb15897fe830d6"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 14:31:46 crc kubenswrapper[4724]: I0226 14:31:46.908599 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://bb269d4bc37375bbed14056416a1ef4f2f56debb2a371c2163fb15897fe830d6" gracePeriod=600
Feb 26 14:31:47 crc kubenswrapper[4724]: I0226 14:31:47.897201 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="bb269d4bc37375bbed14056416a1ef4f2f56debb2a371c2163fb15897fe830d6" exitCode=0
Feb 26 14:31:47 crc kubenswrapper[4724]: I0226 14:31:47.897403 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"bb269d4bc37375bbed14056416a1ef4f2f56debb2a371c2163fb15897fe830d6"}
Feb 26 14:31:47 crc kubenswrapper[4724]: I0226 14:31:47.897526 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845"}
Feb 26 14:31:47 crc kubenswrapper[4724]: I0226 14:31:47.897564 4724 scope.go:117] "RemoveContainer" containerID="76efc6873fad97d1a06f5725db6664c82165b3c3d69b85e0ded0a117ac902389"
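This machine-config-daemon sequence shows the liveness path end to end: HTTP GETs against http://127.0.0.1:8798/health fail thirty seconds apart (14:30:46, 14:31:16, 14:31:46), the third failure reaches the default failureThreshold of 3, the kubelet marks the probe unhealthy and kills the container with its 600s grace period, and the next PLEG events show the old container dead and its replacement started. The kubelet-side check is essentially an HTTP GET with a deadline; a minimal Go approximation (the 1s timeout here is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second} // kubelet probes carry a deadline too

	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// "connect: connection refused" in the log surfaces here
		fmt.Println("liveness failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("liveness status:", resp.Status) // a 2xx/3xx status counts as success
}
```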
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.216096 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6f54"]
Feb 26 14:31:51 crc kubenswrapper[4724]: E0226 14:31:51.219769 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.219819 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server"
Feb 26 14:31:51 crc kubenswrapper[4724]: E0226 14:31:51.219864 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e815183-e92b-4ff4-be6d-7aac3d026e88" containerName="oc"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.219875 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e815183-e92b-4ff4-be6d-7aac3d026e88" containerName="oc"
Feb 26 14:31:51 crc kubenswrapper[4724]: E0226 14:31:51.219893 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="extract-utilities"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.219903 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="extract-utilities"
Feb 26 14:31:51 crc kubenswrapper[4724]: E0226 14:31:51.219926 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ea030-3ba8-4b9f-9dd0-b071cf76659d" containerName="collect-profiles"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.219934 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ea030-3ba8-4b9f-9dd0-b071cf76659d" containerName="collect-profiles"
Feb 26 14:31:51 crc kubenswrapper[4724]: E0226 14:31:51.219950 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="extract-content"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.219957 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="extract-content"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.220576 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e815183-e92b-4ff4-be6d-7aac3d026e88" containerName="oc"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.220595 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59ea030-3ba8-4b9f-9dd0-b071cf76659d" containerName="collect-profiles"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.220619 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ed1721-2470-41e4-aab3-bb275535dc47" containerName="registry-server"
Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.223799 4724 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.280453 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6f54"] Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.402621 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-utilities\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.402710 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-catalog-content\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.402884 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48zl\" (UniqueName: \"kubernetes.io/projected/61e59946-a7f4-46cd-9edb-10b83748beef-kube-api-access-h48zl\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.506682 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-utilities\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.506772 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-utilities\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.506806 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-catalog-content\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.506877 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48zl\" (UniqueName: \"kubernetes.io/projected/61e59946-a7f4-46cd-9edb-10b83748beef-kube-api-access-h48zl\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.507768 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-catalog-content\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.535865 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h48zl\" (UniqueName: \"kubernetes.io/projected/61e59946-a7f4-46cd-9edb-10b83748beef-kube-api-access-h48zl\") pod \"certified-operators-t6f54\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:51 crc kubenswrapper[4724]: I0226 14:31:51.550132 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:31:52 crc kubenswrapper[4724]: I0226 14:31:52.260576 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6f54"] Feb 26 14:31:52 crc kubenswrapper[4724]: I0226 14:31:52.971892 4724 generic.go:334] "Generic (PLEG): container finished" podID="61e59946-a7f4-46cd-9edb-10b83748beef" containerID="b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0" exitCode=0 Feb 26 14:31:52 crc kubenswrapper[4724]: I0226 14:31:52.972035 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerDied","Data":"b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0"} Feb 26 14:31:52 crc kubenswrapper[4724]: I0226 14:31:52.972531 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerStarted","Data":"758ac1e848ad5fde69568b9d166954c66e9c3030fd99e4674774cffbddfc5904"} Feb 26 14:31:54 crc kubenswrapper[4724]: I0226 14:31:54.066585 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerStarted","Data":"868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7"} Feb 26 14:31:58 crc kubenswrapper[4724]: I0226 14:31:58.123026 4724 generic.go:334] "Generic (PLEG): container finished" podID="61e59946-a7f4-46cd-9edb-10b83748beef" containerID="868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7" exitCode=0 Feb 26 14:31:58 crc kubenswrapper[4724]: I0226 14:31:58.123361 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerDied","Data":"868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7"} Feb 26 14:31:59 crc kubenswrapper[4724]: I0226 14:31:59.133122 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerStarted","Data":"5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9"} Feb 26 14:31:59 crc kubenswrapper[4724]: I0226 14:31:59.156487 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6f54" podStartSLOduration=2.5485875399999998 podStartE2EDuration="8.156435442s" podCreationTimestamp="2026-02-26 14:31:51 +0000 UTC" firstStartedPulling="2026-02-26 14:31:52.974599618 +0000 UTC m=+12379.630338733" lastFinishedPulling="2026-02-26 14:31:58.58244752 +0000 UTC m=+12385.238186635" observedRunningTime="2026-02-26 14:31:59.153372655 +0000 UTC m=+12385.809111790" watchObservedRunningTime="2026-02-26 14:31:59.156435442 +0000 UTC m=+12385.812174567" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.226148 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29535272-hzvjj"] Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.227591 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.241912 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-hzvjj"] Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.244483 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.244571 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.244740 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.314078 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzdk\" (UniqueName: \"kubernetes.io/projected/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e-kube-api-access-htzdk\") pod \"auto-csr-approver-29535272-hzvjj\" (UID: \"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e\") " pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.419367 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htzdk\" (UniqueName: \"kubernetes.io/projected/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e-kube-api-access-htzdk\") pod \"auto-csr-approver-29535272-hzvjj\" (UID: \"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e\") " pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.463193 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htzdk\" (UniqueName: \"kubernetes.io/projected/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e-kube-api-access-htzdk\") pod \"auto-csr-approver-29535272-hzvjj\" (UID: \"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e\") " pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:00 crc kubenswrapper[4724]: I0226 14:32:00.574085 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:01 crc kubenswrapper[4724]: I0226 14:32:01.550410 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:32:01 crc kubenswrapper[4724]: I0226 14:32:01.550781 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:32:01 crc kubenswrapper[4724]: I0226 14:32:01.907780 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-hzvjj"] Feb 26 14:32:02 crc kubenswrapper[4724]: I0226 14:32:02.173948 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" event={"ID":"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e","Type":"ContainerStarted","Data":"3f5eb07bdcfa7ef501b1f41fc8f04873b753f7443b9174ba8431304a54b20931"} Feb 26 14:32:02 crc kubenswrapper[4724]: I0226 14:32:02.617022 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t6f54" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server" probeResult="failure" output=< Feb 26 14:32:02 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:32:02 crc kubenswrapper[4724]: > Feb 26 14:32:04 crc kubenswrapper[4724]: I0226 14:32:04.201469 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" event={"ID":"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e","Type":"ContainerStarted","Data":"924b9e48cd84e128b9fa0fb53d9e2850b03be8cda4e56dc8ae1caed1c2fd0459"} Feb 26 14:32:04 crc kubenswrapper[4724]: I0226 14:32:04.216571 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" podStartSLOduration=3.082357365 podStartE2EDuration="4.216548626s" podCreationTimestamp="2026-02-26 14:32:00 +0000 UTC" firstStartedPulling="2026-02-26 14:32:01.911574035 +0000 UTC m=+12388.567313150" lastFinishedPulling="2026-02-26 14:32:03.045765296 +0000 UTC m=+12389.701504411" observedRunningTime="2026-02-26 14:32:04.216534675 +0000 UTC m=+12390.872273790" watchObservedRunningTime="2026-02-26 14:32:04.216548626 +0000 UTC m=+12390.872287751" Feb 26 14:32:05 crc kubenswrapper[4724]: I0226 14:32:05.215942 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" containerID="924b9e48cd84e128b9fa0fb53d9e2850b03be8cda4e56dc8ae1caed1c2fd0459" exitCode=0 Feb 26 14:32:05 crc kubenswrapper[4724]: I0226 14:32:05.216000 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" event={"ID":"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e","Type":"ContainerDied","Data":"924b9e48cd84e128b9fa0fb53d9e2850b03be8cda4e56dc8ae1caed1c2fd0459"} Feb 26 14:32:06 crc kubenswrapper[4724]: I0226 14:32:06.595038 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:06 crc kubenswrapper[4724]: I0226 14:32:06.752127 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htzdk\" (UniqueName: \"kubernetes.io/projected/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e-kube-api-access-htzdk\") pod \"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e\" (UID: \"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e\") " Feb 26 14:32:06 crc kubenswrapper[4724]: I0226 14:32:06.766656 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e-kube-api-access-htzdk" (OuterVolumeSpecName: "kube-api-access-htzdk") pod "b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" (UID: "b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e"). InnerVolumeSpecName "kube-api-access-htzdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:32:06 crc kubenswrapper[4724]: I0226 14:32:06.854433 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htzdk\" (UniqueName: \"kubernetes.io/projected/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e-kube-api-access-htzdk\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:07 crc kubenswrapper[4724]: I0226 14:32:07.091735 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-dcg72"] Feb 26 14:32:07 crc kubenswrapper[4724]: I0226 14:32:07.104414 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-dcg72"] Feb 26 14:32:07 crc kubenswrapper[4724]: I0226 14:32:07.234307 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" event={"ID":"b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e","Type":"ContainerDied","Data":"3f5eb07bdcfa7ef501b1f41fc8f04873b753f7443b9174ba8431304a54b20931"} Feb 26 14:32:07 crc kubenswrapper[4724]: I0226 14:32:07.234744 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5eb07bdcfa7ef501b1f41fc8f04873b753f7443b9174ba8431304a54b20931" Feb 26 14:32:07 crc kubenswrapper[4724]: I0226 14:32:07.234370 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-hzvjj" Feb 26 14:32:07 crc kubenswrapper[4724]: I0226 14:32:07.987914 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db229f1d-ed59-4305-b965-af3d6239ff64" path="/var/lib/kubelet/pods/db229f1d-ed59-4305-b965-af3d6239ff64/volumes" Feb 26 14:32:12 crc kubenswrapper[4724]: I0226 14:32:12.614863 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t6f54" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server" probeResult="failure" output=< Feb 26 14:32:12 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:32:12 crc kubenswrapper[4724]: > Feb 26 14:32:21 crc kubenswrapper[4724]: I0226 14:32:21.610223 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:32:21 crc kubenswrapper[4724]: I0226 14:32:21.697039 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:32:22 crc kubenswrapper[4724]: I0226 14:32:22.419814 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6f54"] Feb 26 14:32:23 crc kubenswrapper[4724]: I0226 14:32:23.389929 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6f54" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server" containerID="cri-o://5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9" gracePeriod=2 Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.133598 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.218529 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h48zl\" (UniqueName: \"kubernetes.io/projected/61e59946-a7f4-46cd-9edb-10b83748beef-kube-api-access-h48zl\") pod \"61e59946-a7f4-46cd-9edb-10b83748beef\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.219845 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-catalog-content\") pod \"61e59946-a7f4-46cd-9edb-10b83748beef\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.219887 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-utilities\") pod \"61e59946-a7f4-46cd-9edb-10b83748beef\" (UID: \"61e59946-a7f4-46cd-9edb-10b83748beef\") " Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.223739 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-utilities" (OuterVolumeSpecName: "utilities") pod "61e59946-a7f4-46cd-9edb-10b83748beef" (UID: "61e59946-a7f4-46cd-9edb-10b83748beef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.259655 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e59946-a7f4-46cd-9edb-10b83748beef-kube-api-access-h48zl" (OuterVolumeSpecName: "kube-api-access-h48zl") pod "61e59946-a7f4-46cd-9edb-10b83748beef" (UID: "61e59946-a7f4-46cd-9edb-10b83748beef"). InnerVolumeSpecName "kube-api-access-h48zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.326700 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h48zl\" (UniqueName: \"kubernetes.io/projected/61e59946-a7f4-46cd-9edb-10b83748beef-kube-api-access-h48zl\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.326744 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.332489 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61e59946-a7f4-46cd-9edb-10b83748beef" (UID: "61e59946-a7f4-46cd-9edb-10b83748beef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.404481 4724 generic.go:334] "Generic (PLEG): container finished" podID="61e59946-a7f4-46cd-9edb-10b83748beef" containerID="5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9" exitCode=0 Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.404531 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerDied","Data":"5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9"} Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.404565 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6f54" event={"ID":"61e59946-a7f4-46cd-9edb-10b83748beef","Type":"ContainerDied","Data":"758ac1e848ad5fde69568b9d166954c66e9c3030fd99e4674774cffbddfc5904"} Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.404586 4724 scope.go:117] "RemoveContainer" containerID="5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.404596 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6f54" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.430396 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e59946-a7f4-46cd-9edb-10b83748beef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.457652 4724 scope.go:117] "RemoveContainer" containerID="868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.488649 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6f54"] Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.527142 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6f54"] Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.530034 4724 scope.go:117] "RemoveContainer" containerID="b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.574749 4724 scope.go:117] "RemoveContainer" containerID="5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9" Feb 26 14:32:24 crc kubenswrapper[4724]: E0226 14:32:24.577789 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9\": container with ID starting with 5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9 not found: ID does not exist" containerID="5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.577833 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9"} err="failed to get container status \"5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9\": rpc error: code = NotFound desc = could not find container \"5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9\": container with ID starting with 5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9 not found: ID does not exist" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.577862 4724 scope.go:117] "RemoveContainer" containerID="868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7" Feb 26 14:32:24 crc kubenswrapper[4724]: E0226 14:32:24.578083 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7\": container with ID starting with 868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7 not found: ID does not exist" containerID="868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.578101 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7"} err="failed to get container status \"868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7\": rpc error: code = NotFound desc = could not find container \"868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7\": container with ID starting with 868981535dcad5d6b35145a8b81fe6a3ab0f0c7bdc68aadb78cbe42807d69ce7 not found: ID does not exist" Feb 26 14:32:24 crc 
kubenswrapper[4724]: I0226 14:32:24.578114 4724 scope.go:117] "RemoveContainer" containerID="b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0" Feb 26 14:32:24 crc kubenswrapper[4724]: E0226 14:32:24.578324 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0\": container with ID starting with b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0 not found: ID does not exist" containerID="b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0" Feb 26 14:32:24 crc kubenswrapper[4724]: I0226 14:32:24.578340 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0"} err="failed to get container status \"b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0\": rpc error: code = NotFound desc = could not find container \"b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0\": container with ID starting with b3acecb67ed0be273f882a3f9bb86b8732092ee0716660281b6257b15f417da0 not found: ID does not exist" Feb 26 14:32:25 crc kubenswrapper[4724]: I0226 14:32:25.987530 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" path="/var/lib/kubelet/pods/61e59946-a7f4-46cd-9edb-10b83748beef/volumes" Feb 26 14:32:33 crc kubenswrapper[4724]: I0226 14:32:33.446726 4724 scope.go:117] "RemoveContainer" containerID="e90bc583bc4edc4b0bf3041c5cc293a55e0113a1de9bdd5b67d13f0f35fbfb6d" Feb 26 14:33:43 crc kubenswrapper[4724]: I0226 14:33:43.517642 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-746558bfbf-gbdpm" podUID="acbb8b99-0b04-48c7-904e-a5c5304813a3" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.188479 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535274-bw9w6"] Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190317 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-content" Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190384 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-content" Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190454 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" containerName="oc" Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190465 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" containerName="oc" Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190499 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-utilities" Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190510 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-utilities" Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190538 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server" Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 
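The RemoveContainer / NotFound / "DeleteContainer returned error" triplets above are benign: by the time each delete was retried the container was already gone, and the CRI runtime reports that with gRPC code NotFound. A minimal sketch of the idempotent-delete pattern this reflects, assuming a google.golang.org/grpc dependency; removeContainer is a hypothetical stand-in for a CRI RemoveContainer call, not kubelet's actual implementation:

```go
// cleanup.go — sketch of tolerating NotFound on delete, under the assumptions above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Stand-in: pretend the runtime no longer knows this container.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent treats "already gone" as success.
func removeIfPresent(id string) error {
	err := removeContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // nothing left to delete
	}
	return err
}

func main() {
	id := "5fd25ead76ececd5234381c76808b38e47ce3586960dd040e7772a18de28d7c9"
	if err := removeIfPresent(id); err != nil {
		fmt.Println("delete failed:", err)
	} else {
		fmt.Println("container absent or removed")
	}
}
```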
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.188479 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535274-bw9w6"]
Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190317 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-content"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190384 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-content"
Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190454 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" containerName="oc"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190465 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" containerName="oc"
Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190499 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-utilities"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190510 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="extract-utilities"
Feb 26 14:34:00 crc kubenswrapper[4724]: E0226 14:34:00.190538 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.190546 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.192326 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" containerName="oc"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.192387 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e59946-a7f4-46cd-9edb-10b83748beef" containerName="registry-server"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.193927 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.211957 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.212554 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.213507 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.266581 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-bw9w6"]
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.325466 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjmm\" (UniqueName: \"kubernetes.io/projected/388796e4-125b-47b9-b97f-8c7c0feff370-kube-api-access-lvjmm\") pod \"auto-csr-approver-29535274-bw9w6\" (UID: \"388796e4-125b-47b9-b97f-8c7c0feff370\") " pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.428845 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjmm\" (UniqueName: \"kubernetes.io/projected/388796e4-125b-47b9-b97f-8c7c0feff370-kube-api-access-lvjmm\") pod \"auto-csr-approver-29535274-bw9w6\" (UID: \"388796e4-125b-47b9-b97f-8c7c0feff370\") " pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.454167 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjmm\" (UniqueName: \"kubernetes.io/projected/388796e4-125b-47b9-b97f-8c7c0feff370-kube-api-access-lvjmm\") pod \"auto-csr-approver-29535274-bw9w6\" (UID: \"388796e4-125b-47b9-b97f-8c7c0feff370\") " pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:00 crc kubenswrapper[4724]: I0226 14:34:00.536504 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:01 crc kubenswrapper[4724]: I0226 14:34:01.079381 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-bw9w6"]
Feb 26 14:34:01 crc kubenswrapper[4724]: I0226 14:34:01.393835 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535274-bw9w6" event={"ID":"388796e4-125b-47b9-b97f-8c7c0feff370","Type":"ContainerStarted","Data":"8a40a7d4e06a46e11fbc02b7eca6ebefb0c0f0efe12e32f2217397b5f6f0ad63"}
Feb 26 14:34:03 crc kubenswrapper[4724]: I0226 14:34:03.415075 4724 generic.go:334] "Generic (PLEG): container finished" podID="388796e4-125b-47b9-b97f-8c7c0feff370" containerID="618635b6f586f1b9d35303a873406dbe6a57dcb5e6734f648e5c5378c56e998f" exitCode=0
Feb 26 14:34:03 crc kubenswrapper[4724]: I0226 14:34:03.415133 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535274-bw9w6" event={"ID":"388796e4-125b-47b9-b97f-8c7c0feff370","Type":"ContainerDied","Data":"618635b6f586f1b9d35303a873406dbe6a57dcb5e6734f648e5c5378c56e998f"}
Feb 26 14:34:04 crc kubenswrapper[4724]: I0226 14:34:04.763325 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:04 crc kubenswrapper[4724]: I0226 14:34:04.818676 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvjmm\" (UniqueName: \"kubernetes.io/projected/388796e4-125b-47b9-b97f-8c7c0feff370-kube-api-access-lvjmm\") pod \"388796e4-125b-47b9-b97f-8c7c0feff370\" (UID: \"388796e4-125b-47b9-b97f-8c7c0feff370\") "
Feb 26 14:34:04 crc kubenswrapper[4724]: I0226 14:34:04.825384 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388796e4-125b-47b9-b97f-8c7c0feff370-kube-api-access-lvjmm" (OuterVolumeSpecName: "kube-api-access-lvjmm") pod "388796e4-125b-47b9-b97f-8c7c0feff370" (UID: "388796e4-125b-47b9-b97f-8c7c0feff370"). InnerVolumeSpecName "kube-api-access-lvjmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:34:04 crc kubenswrapper[4724]: I0226 14:34:04.921837 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvjmm\" (UniqueName: \"kubernetes.io/projected/388796e4-125b-47b9-b97f-8c7c0feff370-kube-api-access-lvjmm\") on node \"crc\" DevicePath \"\""
Feb 26 14:34:05 crc kubenswrapper[4724]: I0226 14:34:05.435833 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535274-bw9w6" event={"ID":"388796e4-125b-47b9-b97f-8c7c0feff370","Type":"ContainerDied","Data":"8a40a7d4e06a46e11fbc02b7eca6ebefb0c0f0efe12e32f2217397b5f6f0ad63"}
Feb 26 14:34:05 crc kubenswrapper[4724]: I0226 14:34:05.435881 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a40a7d4e06a46e11fbc02b7eca6ebefb0c0f0efe12e32f2217397b5f6f0ad63"
Feb 26 14:34:05 crc kubenswrapper[4724]: I0226 14:34:05.435963 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-bw9w6"
Feb 26 14:34:05 crc kubenswrapper[4724]: I0226 14:34:05.841488 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-vw8sv"]
Feb 26 14:34:05 crc kubenswrapper[4724]: I0226 14:34:05.854953 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-vw8sv"]
Feb 26 14:34:05 crc kubenswrapper[4724]: I0226 14:34:05.987999 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9cce36-8e73-4532-a822-9be5c3933b75" path="/var/lib/kubelet/pods/7f9cce36-8e73-4532-a822-9be5c3933b75/volumes"
Feb 26 14:34:16 crc kubenswrapper[4724]: I0226 14:34:16.908617 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:34:16 crc kubenswrapper[4724]: I0226 14:34:16.909126 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:34:34 crc kubenswrapper[4724]: I0226 14:34:34.166765 4724 scope.go:117] "RemoveContainer" containerID="8324c0fe97f379fed4001064b18769f3152293caf4750fb4974d009354ce259d"
Feb 26 14:34:46 crc kubenswrapper[4724]: I0226 14:34:46.905841 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:34:46 crc kubenswrapper[4724]: I0226 14:34:46.906276 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:35:16 crc kubenswrapper[4724]: I0226 14:35:16.906558 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:35:16 crc kubenswrapper[4724]: I0226 14:35:16.907129 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:35:16 crc kubenswrapper[4724]: I0226 14:35:16.907191 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
containerStatusID={"Type":"cri-o","ID":"f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:35:16 crc kubenswrapper[4724]: I0226 14:35:16.908381 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" gracePeriod=600 Feb 26 14:35:17 crc kubenswrapper[4724]: E0226 14:35:17.034298 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:35:17 crc kubenswrapper[4724]: I0226 14:35:17.127876 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" exitCode=0 Feb 26 14:35:17 crc kubenswrapper[4724]: I0226 14:35:17.127971 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845"} Feb 26 14:35:17 crc kubenswrapper[4724]: I0226 14:35:17.128339 4724 scope.go:117] "RemoveContainer" containerID="bb269d4bc37375bbed14056416a1ef4f2f56debb2a371c2163fb15897fe830d6" Feb 26 14:35:17 crc kubenswrapper[4724]: I0226 14:35:17.129035 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:35:17 crc kubenswrapper[4724]: E0226 14:35:17.129411 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:35:29 crc kubenswrapper[4724]: I0226 14:35:29.975545 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:35:29 crc kubenswrapper[4724]: E0226 14:35:29.976369 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:35:41 crc kubenswrapper[4724]: I0226 14:35:41.976403 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:35:41 crc kubenswrapper[4724]: E0226 14:35:41.978442 4724 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:35:53 crc kubenswrapper[4724]: I0226 14:35:53.982712 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:35:53 crc kubenswrapper[4724]: E0226 14:35:53.983631 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.158286 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535276-r2zf9"] Feb 26 14:36:00 crc kubenswrapper[4724]: E0226 14:36:00.159782 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388796e4-125b-47b9-b97f-8c7c0feff370" containerName="oc" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.159802 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="388796e4-125b-47b9-b97f-8c7c0feff370" containerName="oc" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.160073 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="388796e4-125b-47b9-b97f-8c7c0feff370" containerName="oc" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.160939 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.165777 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.165940 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.167934 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.186352 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-r2zf9"] Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.214449 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmxf\" (UniqueName: \"kubernetes.io/projected/d16ef40b-5c00-4c7b-afc0-28f98836bbd5-kube-api-access-dlmxf\") pod \"auto-csr-approver-29535276-r2zf9\" (UID: \"d16ef40b-5c00-4c7b-afc0-28f98836bbd5\") " pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.317222 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmxf\" (UniqueName: \"kubernetes.io/projected/d16ef40b-5c00-4c7b-afc0-28f98836bbd5-kube-api-access-dlmxf\") pod \"auto-csr-approver-29535276-r2zf9\" (UID: \"d16ef40b-5c00-4c7b-afc0-28f98836bbd5\") " pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.349324 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmxf\" (UniqueName: \"kubernetes.io/projected/d16ef40b-5c00-4c7b-afc0-28f98836bbd5-kube-api-access-dlmxf\") pod \"auto-csr-approver-29535276-r2zf9\" (UID: \"d16ef40b-5c00-4c7b-afc0-28f98836bbd5\") " pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.499472 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:00 crc kubenswrapper[4724]: I0226 14:36:00.990690 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-r2zf9"] Feb 26 14:36:01 crc kubenswrapper[4724]: I0226 14:36:01.001980 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:36:01 crc kubenswrapper[4724]: I0226 14:36:01.592031 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" event={"ID":"d16ef40b-5c00-4c7b-afc0-28f98836bbd5","Type":"ContainerStarted","Data":"5ab2fddf59cf4e69d7d520c67394abf4ca06443d2e1ae31cb3274169c8585192"} Feb 26 14:36:02 crc kubenswrapper[4724]: I0226 14:36:02.603154 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" event={"ID":"d16ef40b-5c00-4c7b-afc0-28f98836bbd5","Type":"ContainerStarted","Data":"7fca827e7923fa6da4424bc957db4a04ac19388de983addcce957ade6db62760"} Feb 26 14:36:02 crc kubenswrapper[4724]: I0226 14:36:02.620683 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" podStartSLOduration=1.75299175 podStartE2EDuration="2.620651623s" podCreationTimestamp="2026-02-26 14:36:00 +0000 UTC" firstStartedPulling="2026-02-26 14:36:00.99689706 +0000 UTC m=+12627.652636175" lastFinishedPulling="2026-02-26 14:36:01.864556933 +0000 UTC m=+12628.520296048" observedRunningTime="2026-02-26 14:36:02.617745131 +0000 UTC m=+12629.273484246" watchObservedRunningTime="2026-02-26 14:36:02.620651623 +0000 UTC m=+12629.276390728" Feb 26 14:36:04 crc kubenswrapper[4724]: I0226 14:36:04.624582 4724 generic.go:334] "Generic (PLEG): container finished" podID="d16ef40b-5c00-4c7b-afc0-28f98836bbd5" containerID="7fca827e7923fa6da4424bc957db4a04ac19388de983addcce957ade6db62760" exitCode=0 Feb 26 14:36:04 crc kubenswrapper[4724]: I0226 14:36:04.624627 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" event={"ID":"d16ef40b-5c00-4c7b-afc0-28f98836bbd5","Type":"ContainerDied","Data":"7fca827e7923fa6da4424bc957db4a04ac19388de983addcce957ade6db62760"} Feb 26 14:36:05 crc kubenswrapper[4724]: I0226 14:36:05.989224 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.143395 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmxf\" (UniqueName: \"kubernetes.io/projected/d16ef40b-5c00-4c7b-afc0-28f98836bbd5-kube-api-access-dlmxf\") pod \"d16ef40b-5c00-4c7b-afc0-28f98836bbd5\" (UID: \"d16ef40b-5c00-4c7b-afc0-28f98836bbd5\") " Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.151735 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16ef40b-5c00-4c7b-afc0-28f98836bbd5-kube-api-access-dlmxf" (OuterVolumeSpecName: "kube-api-access-dlmxf") pod "d16ef40b-5c00-4c7b-afc0-28f98836bbd5" (UID: "d16ef40b-5c00-4c7b-afc0-28f98836bbd5"). InnerVolumeSpecName "kube-api-access-dlmxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.246251 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmxf\" (UniqueName: \"kubernetes.io/projected/d16ef40b-5c00-4c7b-afc0-28f98836bbd5-kube-api-access-dlmxf\") on node \"crc\" DevicePath \"\"" Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.644056 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" event={"ID":"d16ef40b-5c00-4c7b-afc0-28f98836bbd5","Type":"ContainerDied","Data":"5ab2fddf59cf4e69d7d520c67394abf4ca06443d2e1ae31cb3274169c8585192"} Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.644119 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab2fddf59cf4e69d7d520c67394abf4ca06443d2e1ae31cb3274169c8585192" Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.644216 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-r2zf9" Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.727797 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-xcrrj"] Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.739144 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-xcrrj"] Feb 26 14:36:06 crc kubenswrapper[4724]: I0226 14:36:06.981220 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:36:06 crc kubenswrapper[4724]: E0226 14:36:06.981748 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:36:07 crc kubenswrapper[4724]: I0226 14:36:07.994745 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e815183-e92b-4ff4-be6d-7aac3d026e88" path="/var/lib/kubelet/pods/2e815183-e92b-4ff4-be6d-7aac3d026e88/volumes" Feb 26 14:36:21 crc kubenswrapper[4724]: I0226 14:36:21.975631 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:36:21 crc kubenswrapper[4724]: E0226 14:36:21.976357 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:36:32 crc kubenswrapper[4724]: I0226 14:36:32.975387 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:36:32 crc kubenswrapper[4724]: E0226 14:36:32.976513 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 26 14:36:32 crc kubenswrapper[4724]: E0226 14:36:32.976513 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:36:34 crc kubenswrapper[4724]: I0226 14:36:34.258223 4724 scope.go:117] "RemoveContainer" containerID="4670d55d4f295923f56469e47e49bb0b2fc10e908ec7957066322fd420e84748"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.576969 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l67bw"]
Feb 26 14:36:43 crc kubenswrapper[4724]: E0226 14:36:43.579103 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16ef40b-5c00-4c7b-afc0-28f98836bbd5" containerName="oc"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.579218 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16ef40b-5c00-4c7b-afc0-28f98836bbd5" containerName="oc"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.579489 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16ef40b-5c00-4c7b-afc0-28f98836bbd5" containerName="oc"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.582628 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.596900 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l67bw"]
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.689541 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-utilities\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.689583 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-catalog-content\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.689714 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ts7\" (UniqueName: \"kubernetes.io/projected/478bd469-fb16-4cab-a3de-eed03f6919c4-kube-api-access-q4ts7\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.790716 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4ts7\" (UniqueName: \"kubernetes.io/projected/478bd469-fb16-4cab-a3de-eed03f6919c4-kube-api-access-q4ts7\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.790879 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-utilities\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.790915 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-catalog-content\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.791487 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-utilities\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.791518 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-catalog-content\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.819172 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ts7\" (UniqueName: \"kubernetes.io/projected/478bd469-fb16-4cab-a3de-eed03f6919c4-kube-api-access-q4ts7\") pod \"community-operators-l67bw\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") " pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:43 crc kubenswrapper[4724]: I0226 14:36:43.979002 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:44 crc kubenswrapper[4724]: I0226 14:36:44.975518 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845"
Feb 26 14:36:44 crc kubenswrapper[4724]: E0226 14:36:44.976303 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:36:45 crc kubenswrapper[4724]: I0226 14:36:45.189514 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l67bw"]
Feb 26 14:36:46 crc kubenswrapper[4724]: I0226 14:36:46.059955 4724 generic.go:334] "Generic (PLEG): container finished" podID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerID="43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77" exitCode=0
Feb 26 14:36:46 crc kubenswrapper[4724]: I0226 14:36:46.060503 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerDied","Data":"43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77"}
Feb 26 14:36:46 crc kubenswrapper[4724]: I0226 14:36:46.060549 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerStarted","Data":"082fbffde41015d85576ed5ce7d74dbd6deef931de4bd6c96e4b5605f52d09f9"}
Feb 26 14:36:47 crc kubenswrapper[4724]: I0226 14:36:47.070033 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerStarted","Data":"2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5"}
Feb 26 14:36:50 crc kubenswrapper[4724]: I0226 14:36:50.103066 4724 generic.go:334] "Generic (PLEG): container finished" podID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerID="2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5" exitCode=0
Feb 26 14:36:50 crc kubenswrapper[4724]: I0226 14:36:50.103152 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerDied","Data":"2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5"}
Feb 26 14:36:51 crc kubenswrapper[4724]: I0226 14:36:51.137937 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerStarted","Data":"884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811"}
Feb 26 14:36:51 crc kubenswrapper[4724]: I0226 14:36:51.172013 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l67bw" podStartSLOduration=3.755115539 podStartE2EDuration="8.17197421s" podCreationTimestamp="2026-02-26 14:36:43 +0000 UTC" firstStartedPulling="2026-02-26 14:36:46.062848036 +0000 UTC m=+12672.718587171" lastFinishedPulling="2026-02-26 14:36:50.479706727 +0000 UTC m=+12677.135445842" observedRunningTime="2026-02-26 14:36:51.158981547 +0000 UTC m=+12677.814720672" watchObservedRunningTime="2026-02-26 14:36:51.17197421 +0000 UTC m=+12677.827713325"
Feb 26 14:36:53 crc kubenswrapper[4724]: I0226 14:36:53.986987 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:53 crc kubenswrapper[4724]: I0226 14:36:53.987453 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:36:55 crc kubenswrapper[4724]: I0226 14:36:55.027574 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l67bw" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:36:55 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:36:55 crc kubenswrapper[4724]: >
Feb 26 14:36:55 crc kubenswrapper[4724]: I0226 14:36:55.975874 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845"
Feb 26 14:36:55 crc kubenswrapper[4724]: E0226 14:36:55.976512 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:37:05 crc kubenswrapper[4724]: I0226 14:37:05.042629 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l67bw" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:37:05 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:37:05 crc kubenswrapper[4724]: >
Feb 26 14:37:07 crc kubenswrapper[4724]: I0226 14:37:07.978148 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845"
Feb 26 14:37:07 crc kubenswrapper[4724]: E0226 14:37:07.978736 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:37:14 crc kubenswrapper[4724]: I0226 14:37:14.053563 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:37:14 crc kubenswrapper[4724]: I0226 14:37:14.115277 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:37:14 crc kubenswrapper[4724]: I0226 14:37:14.823066 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l67bw"]
Feb 26 14:37:15 crc kubenswrapper[4724]: I0226 14:37:15.370086 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l67bw" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="registry-server" containerID="cri-o://884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811" gracePeriod=2
Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.228436 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67bw"
Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.289135 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-catalog-content\") pod \"478bd469-fb16-4cab-a3de-eed03f6919c4\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") "
Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.289326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4ts7\" (UniqueName: \"kubernetes.io/projected/478bd469-fb16-4cab-a3de-eed03f6919c4-kube-api-access-q4ts7\") pod \"478bd469-fb16-4cab-a3de-eed03f6919c4\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") "
Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.289442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-utilities\") pod \"478bd469-fb16-4cab-a3de-eed03f6919c4\" (UID: \"478bd469-fb16-4cab-a3de-eed03f6919c4\") "
Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.290916 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-utilities" (OuterVolumeSpecName: "utilities") pod "478bd469-fb16-4cab-a3de-eed03f6919c4" (UID: "478bd469-fb16-4cab-a3de-eed03f6919c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.336467 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478bd469-fb16-4cab-a3de-eed03f6919c4-kube-api-access-q4ts7" (OuterVolumeSpecName: "kube-api-access-q4ts7") pod "478bd469-fb16-4cab-a3de-eed03f6919c4" (UID: "478bd469-fb16-4cab-a3de-eed03f6919c4"). InnerVolumeSpecName "kube-api-access-q4ts7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.375893 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "478bd469-fb16-4cab-a3de-eed03f6919c4" (UID: "478bd469-fb16-4cab-a3de-eed03f6919c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.383257 4724 generic.go:334] "Generic (PLEG): container finished" podID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerID="884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811" exitCode=0 Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.383381 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67bw" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.383418 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerDied","Data":"884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811"} Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.383480 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67bw" event={"ID":"478bd469-fb16-4cab-a3de-eed03f6919c4","Type":"ContainerDied","Data":"082fbffde41015d85576ed5ce7d74dbd6deef931de4bd6c96e4b5605f52d09f9"} Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.383502 4724 scope.go:117] "RemoveContainer" containerID="884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.391341 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4ts7\" (UniqueName: \"kubernetes.io/projected/478bd469-fb16-4cab-a3de-eed03f6919c4-kube-api-access-q4ts7\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.391373 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.391387 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478bd469-fb16-4cab-a3de-eed03f6919c4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.409004 4724 scope.go:117] "RemoveContainer" containerID="2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.441160 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l67bw"] Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.441289 4724 scope.go:117] "RemoveContainer" 
containerID="43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.451168 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l67bw"] Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.499051 4724 scope.go:117] "RemoveContainer" containerID="884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811" Feb 26 14:37:16 crc kubenswrapper[4724]: E0226 14:37:16.501701 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811\": container with ID starting with 884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811 not found: ID does not exist" containerID="884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.501749 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811"} err="failed to get container status \"884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811\": rpc error: code = NotFound desc = could not find container \"884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811\": container with ID starting with 884f4c4e9757feea976aa1f68b11447f8e545f25fd098408e1d12c3d5afee811 not found: ID does not exist" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.501776 4724 scope.go:117] "RemoveContainer" containerID="2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5" Feb 26 14:37:16 crc kubenswrapper[4724]: E0226 14:37:16.503022 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5\": container with ID starting with 2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5 not found: ID does not exist" containerID="2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.503062 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5"} err="failed to get container status \"2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5\": rpc error: code = NotFound desc = could not find container \"2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5\": container with ID starting with 2f67590ba9492b2cecd585d19fd8b9b96191297f637628fea61490e25c4269b5 not found: ID does not exist" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.503087 4724 scope.go:117] "RemoveContainer" containerID="43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77" Feb 26 14:37:16 crc kubenswrapper[4724]: E0226 14:37:16.503437 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77\": container with ID starting with 43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77 not found: ID does not exist" containerID="43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77" Feb 26 14:37:16 crc kubenswrapper[4724]: I0226 14:37:16.503470 4724 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77"} err="failed to get container status \"43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77\": rpc error: code = NotFound desc = could not find container \"43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77\": container with ID starting with 43058f8424ba13871d012c0ec173b78eecb235ecd171835cb836cf777354cf77 not found: ID does not exist" Feb 26 14:37:17 crc kubenswrapper[4724]: I0226 14:37:17.990923 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" path="/var/lib/kubelet/pods/478bd469-fb16-4cab-a3de-eed03f6919c4/volumes" Feb 26 14:37:21 crc kubenswrapper[4724]: I0226 14:37:21.974956 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:37:21 crc kubenswrapper[4724]: E0226 14:37:21.975606 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:37:35 crc kubenswrapper[4724]: I0226 14:37:35.976447 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:37:35 crc kubenswrapper[4724]: E0226 14:37:35.977307 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:37:50 crc kubenswrapper[4724]: I0226 14:37:50.975936 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:37:50 crc kubenswrapper[4724]: E0226 14:37:50.976691 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.165592 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535278-qj6hq"] Feb 26 14:38:00 crc kubenswrapper[4724]: E0226 14:38:00.166502 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="extract-utilities" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.166516 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="extract-utilities" Feb 26 14:38:00 crc kubenswrapper[4724]: E0226 14:38:00.166528 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="extract-content" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 
14:38:00.166534 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="extract-content" Feb 26 14:38:00 crc kubenswrapper[4724]: E0226 14:38:00.166577 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="registry-server" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.166583 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="registry-server" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.166769 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="478bd469-fb16-4cab-a3de-eed03f6919c4" containerName="registry-server" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.167437 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.171333 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.171810 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.172053 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.189723 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-qj6hq"] Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.281267 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z522s\" (UniqueName: \"kubernetes.io/projected/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3-kube-api-access-z522s\") pod \"auto-csr-approver-29535278-qj6hq\" (UID: \"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3\") " pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.383827 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z522s\" (UniqueName: \"kubernetes.io/projected/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3-kube-api-access-z522s\") pod \"auto-csr-approver-29535278-qj6hq\" (UID: \"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3\") " pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.405567 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z522s\" (UniqueName: \"kubernetes.io/projected/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3-kube-api-access-z522s\") pod \"auto-csr-approver-29535278-qj6hq\" (UID: \"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3\") " pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:00 crc kubenswrapper[4724]: I0226 14:38:00.495132 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:01 crc kubenswrapper[4724]: I0226 14:38:01.008454 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-qj6hq"] Feb 26 14:38:01 crc kubenswrapper[4724]: I0226 14:38:01.849918 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" event={"ID":"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3","Type":"ContainerStarted","Data":"18335929855e178fd0f83dffac6401cc15e7530095ec9c4564e6619b2900215a"} Feb 26 14:38:03 crc kubenswrapper[4724]: I0226 14:38:03.871637 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" event={"ID":"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3","Type":"ContainerStarted","Data":"e3bf516d88bbc5794bc676aa87d96f6d3e535bffb3a165419c44390286616a8b"} Feb 26 14:38:03 crc kubenswrapper[4724]: I0226 14:38:03.894505 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" podStartSLOduration=2.195138966 podStartE2EDuration="3.894483489s" podCreationTimestamp="2026-02-26 14:38:00 +0000 UTC" firstStartedPulling="2026-02-26 14:38:01.004644355 +0000 UTC m=+12747.660383470" lastFinishedPulling="2026-02-26 14:38:02.703988878 +0000 UTC m=+12749.359727993" observedRunningTime="2026-02-26 14:38:03.88889266 +0000 UTC m=+12750.544631785" watchObservedRunningTime="2026-02-26 14:38:03.894483489 +0000 UTC m=+12750.550222614" Feb 26 14:38:05 crc kubenswrapper[4724]: I0226 14:38:05.889197 4724 generic.go:334] "Generic (PLEG): container finished" podID="d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3" containerID="e3bf516d88bbc5794bc676aa87d96f6d3e535bffb3a165419c44390286616a8b" exitCode=0 Feb 26 14:38:05 crc kubenswrapper[4724]: I0226 14:38:05.889277 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" event={"ID":"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3","Type":"ContainerDied","Data":"e3bf516d88bbc5794bc676aa87d96f6d3e535bffb3a165419c44390286616a8b"} Feb 26 14:38:05 crc kubenswrapper[4724]: I0226 14:38:05.975559 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:38:05 crc kubenswrapper[4724]: E0226 14:38:05.975920 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.429170 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4q6t"] Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.431145 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.438718 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4q6t"] Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.626608 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-catalog-content\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.626806 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rkg\" (UniqueName: \"kubernetes.io/projected/35c62e30-a697-46ab-b389-9e08fde60721-kube-api-access-x6rkg\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.627028 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-utilities\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.728151 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rkg\" (UniqueName: \"kubernetes.io/projected/35c62e30-a697-46ab-b389-9e08fde60721-kube-api-access-x6rkg\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.728652 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-utilities\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.729093 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-utilities\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.729216 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-catalog-content\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.729528 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-catalog-content\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.752157 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x6rkg\" (UniqueName: \"kubernetes.io/projected/35c62e30-a697-46ab-b389-9e08fde60721-kube-api-access-x6rkg\") pod \"redhat-marketplace-p4q6t\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:06 crc kubenswrapper[4724]: I0226 14:38:06.789047 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.402801 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.480698 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4q6t"] Feb 26 14:38:07 crc kubenswrapper[4724]: W0226 14:38:07.490206 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c62e30_a697_46ab_b389_9e08fde60721.slice/crio-6141a08b76973113d982e724a2b04594642bae7c4f0686a4293303f251b4e714 WatchSource:0}: Error finding container 6141a08b76973113d982e724a2b04594642bae7c4f0686a4293303f251b4e714: Status 404 returned error can't find the container with id 6141a08b76973113d982e724a2b04594642bae7c4f0686a4293303f251b4e714 Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.546928 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z522s\" (UniqueName: \"kubernetes.io/projected/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3-kube-api-access-z522s\") pod \"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3\" (UID: \"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3\") " Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.551977 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3-kube-api-access-z522s" (OuterVolumeSpecName: "kube-api-access-z522s") pod "d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3" (UID: "d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3"). InnerVolumeSpecName "kube-api-access-z522s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.649382 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z522s\" (UniqueName: \"kubernetes.io/projected/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3-kube-api-access-z522s\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.909382 4724 generic.go:334] "Generic (PLEG): container finished" podID="35c62e30-a697-46ab-b389-9e08fde60721" containerID="2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216" exitCode=0 Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.909460 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerDied","Data":"2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216"} Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.909490 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerStarted","Data":"6141a08b76973113d982e724a2b04594642bae7c4f0686a4293303f251b4e714"} Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.916723 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" event={"ID":"d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3","Type":"ContainerDied","Data":"18335929855e178fd0f83dffac6401cc15e7530095ec9c4564e6619b2900215a"} Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.916760 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18335929855e178fd0f83dffac6401cc15e7530095ec9c4564e6619b2900215a" Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.916816 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-qj6hq" Feb 26 14:38:07 crc kubenswrapper[4724]: I0226 14:38:07.993713 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-hzvjj"] Feb 26 14:38:08 crc kubenswrapper[4724]: I0226 14:38:08.002560 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-hzvjj"] Feb 26 14:38:09 crc kubenswrapper[4724]: I0226 14:38:09.939527 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerStarted","Data":"7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d"} Feb 26 14:38:09 crc kubenswrapper[4724]: I0226 14:38:09.991400 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e" path="/var/lib/kubelet/pods/b9bf1b7e-d18a-4e94-9de8-2ecee5d4900e/volumes" Feb 26 14:38:13 crc kubenswrapper[4724]: E0226 14:38:13.383131 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c62e30_a697_46ab_b389_9e08fde60721.slice/crio-7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c62e30_a697_46ab_b389_9e08fde60721.slice/crio-conmon-7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d.scope\": RecentStats: unable to find data in memory cache]" Feb 26 14:38:14 crc kubenswrapper[4724]: I0226 14:38:14.008728 4724 generic.go:334] "Generic (PLEG): container finished" podID="35c62e30-a697-46ab-b389-9e08fde60721" containerID="7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d" exitCode=0 Feb 26 14:38:14 crc kubenswrapper[4724]: I0226 14:38:14.015227 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerDied","Data":"7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d"} Feb 26 14:38:15 crc kubenswrapper[4724]: I0226 14:38:15.020047 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerStarted","Data":"407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969"} Feb 26 14:38:15 crc kubenswrapper[4724]: I0226 14:38:15.042761 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4q6t" podStartSLOduration=2.372264505 podStartE2EDuration="9.042745019s" podCreationTimestamp="2026-02-26 14:38:06 +0000 UTC" firstStartedPulling="2026-02-26 14:38:07.912100103 +0000 UTC m=+12754.567839218" lastFinishedPulling="2026-02-26 14:38:14.582580617 +0000 UTC m=+12761.238319732" observedRunningTime="2026-02-26 14:38:15.04157821 +0000 UTC m=+12761.697317325" watchObservedRunningTime="2026-02-26 14:38:15.042745019 +0000 UTC m=+12761.698484134" Feb 26 14:38:16 crc kubenswrapper[4724]: I0226 14:38:16.790521 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:16 crc kubenswrapper[4724]: I0226 14:38:16.791634 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:17 crc kubenswrapper[4724]: I0226 14:38:17.838811 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p4q6t" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="registry-server" probeResult="failure" output=< Feb 26 14:38:17 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:38:17 crc kubenswrapper[4724]: > Feb 26 14:38:17 crc kubenswrapper[4724]: I0226 14:38:17.975828 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:38:17 crc kubenswrapper[4724]: E0226 14:38:17.976107 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:38:26 crc kubenswrapper[4724]: I0226 14:38:26.848899 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:26 crc kubenswrapper[4724]: I0226 14:38:26.910499 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:27 crc kubenswrapper[4724]: I0226 14:38:27.092719 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4q6t"] Feb 26 14:38:27 crc kubenswrapper[4724]: I0226 14:38:27.892581 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4q6t" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="registry-server" containerID="cri-o://407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969" gracePeriod=2 Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.407433 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.465199 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-catalog-content\") pod \"35c62e30-a697-46ab-b389-9e08fde60721\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.465427 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-utilities\") pod \"35c62e30-a697-46ab-b389-9e08fde60721\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.465453 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rkg\" (UniqueName: \"kubernetes.io/projected/35c62e30-a697-46ab-b389-9e08fde60721-kube-api-access-x6rkg\") pod \"35c62e30-a697-46ab-b389-9e08fde60721\" (UID: \"35c62e30-a697-46ab-b389-9e08fde60721\") " Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.466044 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-utilities" (OuterVolumeSpecName: "utilities") pod "35c62e30-a697-46ab-b389-9e08fde60721" (UID: "35c62e30-a697-46ab-b389-9e08fde60721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.473411 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c62e30-a697-46ab-b389-9e08fde60721-kube-api-access-x6rkg" (OuterVolumeSpecName: "kube-api-access-x6rkg") pod "35c62e30-a697-46ab-b389-9e08fde60721" (UID: "35c62e30-a697-46ab-b389-9e08fde60721"). InnerVolumeSpecName "kube-api-access-x6rkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.495513 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35c62e30-a697-46ab-b389-9e08fde60721" (UID: "35c62e30-a697-46ab-b389-9e08fde60721"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.567649 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.567687 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6rkg\" (UniqueName: \"kubernetes.io/projected/35c62e30-a697-46ab-b389-9e08fde60721-kube-api-access-x6rkg\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.567700 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c62e30-a697-46ab-b389-9e08fde60721-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.902949 4724 generic.go:334] "Generic (PLEG): container finished" podID="35c62e30-a697-46ab-b389-9e08fde60721" containerID="407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969" exitCode=0 Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.902993 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerDied","Data":"407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969"} Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.903006 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4q6t" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.903023 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4q6t" event={"ID":"35c62e30-a697-46ab-b389-9e08fde60721","Type":"ContainerDied","Data":"6141a08b76973113d982e724a2b04594642bae7c4f0686a4293303f251b4e714"} Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.903042 4724 scope.go:117] "RemoveContainer" containerID="407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.928534 4724 scope.go:117] "RemoveContainer" containerID="7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.957708 4724 scope.go:117] "RemoveContainer" containerID="2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216" Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.968445 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4q6t"] Feb 26 14:38:28 crc kubenswrapper[4724]: I0226 14:38:28.979990 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4q6t"] Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.008594 4724 scope.go:117] "RemoveContainer" containerID="407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969" Feb 26 14:38:29 crc kubenswrapper[4724]: E0226 14:38:29.009000 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969\": container with ID starting with 407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969 not found: ID does not exist" containerID="407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969" Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.009039 4724 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969"} err="failed to get container status \"407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969\": rpc error: code = NotFound desc = could not find container \"407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969\": container with ID starting with 407d46febe8849cf5e13308c4d0be6477863d2936361005bed75e177d8cbf969 not found: ID does not exist" Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.009064 4724 scope.go:117] "RemoveContainer" containerID="7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d" Feb 26 14:38:29 crc kubenswrapper[4724]: E0226 14:38:29.009571 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d\": container with ID starting with 7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d not found: ID does not exist" containerID="7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d" Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.009611 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d"} err="failed to get container status \"7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d\": rpc error: code = NotFound desc = could not find container \"7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d\": container with ID starting with 7397674f4aa2ad7c68187de3007670ad069054f7ca61ea5b2041b5fa41f9032d not found: ID does not exist" Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.009638 4724 scope.go:117] "RemoveContainer" containerID="2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216" Feb 26 14:38:29 crc kubenswrapper[4724]: E0226 14:38:29.010040 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216\": container with ID starting with 2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216 not found: ID does not exist" containerID="2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216" Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.010085 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216"} err="failed to get container status \"2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216\": rpc error: code = NotFound desc = could not find container \"2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216\": container with ID starting with 2399ff98c82ace5281179daf1651162bc5b3f537b62add908199b7a09288c216 not found: ID does not exist" Feb 26 14:38:29 crc kubenswrapper[4724]: I0226 14:38:29.989689 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c62e30-a697-46ab-b389-9e08fde60721" path="/var/lib/kubelet/pods/35c62e30-a697-46ab-b389-9e08fde60721/volumes" Feb 26 14:38:30 crc kubenswrapper[4724]: I0226 14:38:30.977076 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:38:30 crc kubenswrapper[4724]: E0226 14:38:30.977837 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:38:34 crc kubenswrapper[4724]: I0226 14:38:34.399891 4724 scope.go:117] "RemoveContainer" containerID="924b9e48cd84e128b9fa0fb53d9e2850b03be8cda4e56dc8ae1caed1c2fd0459" Feb 26 14:38:45 crc kubenswrapper[4724]: I0226 14:38:45.975536 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:38:45 crc kubenswrapper[4724]: E0226 14:38:45.976258 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:39:00 crc kubenswrapper[4724]: I0226 14:39:00.976172 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:39:00 crc kubenswrapper[4724]: E0226 14:39:00.977020 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:39:11 crc kubenswrapper[4724]: I0226 14:39:11.975364 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:39:11 crc kubenswrapper[4724]: E0226 14:39:11.977707 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:39:26 crc kubenswrapper[4724]: I0226 14:39:26.975973 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:39:26 crc kubenswrapper[4724]: E0226 14:39:26.976774 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:39:39 crc kubenswrapper[4724]: I0226 14:39:39.976670 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:39:39 crc kubenswrapper[4724]: E0226 14:39:39.977850 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:39:52 crc kubenswrapper[4724]: I0226 14:39:52.976614 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:39:52 crc kubenswrapper[4724]: E0226 14:39:52.977548 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.831694 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pcft6"] Feb 26 14:39:58 crc kubenswrapper[4724]: E0226 14:39:58.832917 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="registry-server" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.832942 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="registry-server" Feb 26 14:39:58 crc kubenswrapper[4724]: E0226 14:39:58.832987 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3" containerName="oc" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.832996 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3" containerName="oc" Feb 26 14:39:58 crc kubenswrapper[4724]: E0226 14:39:58.833005 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="extract-utilities" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.833014 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="extract-utilities" Feb 26 14:39:58 crc kubenswrapper[4724]: E0226 14:39:58.833035 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="extract-content" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.833043 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="extract-content" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.833309 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c62e30-a697-46ab-b389-9e08fde60721" containerName="registry-server" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.833337 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3" containerName="oc" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.835326 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.863451 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcft6"] Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.952254 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-catalog-content\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.952551 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-245m4\" (UniqueName: \"kubernetes.io/projected/3e0dd526-fa51-4271-a603-f921392d92b3-kube-api-access-245m4\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:58 crc kubenswrapper[4724]: I0226 14:39:58.952830 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-utilities\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.055086 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-catalog-content\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.055296 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-245m4\" (UniqueName: \"kubernetes.io/projected/3e0dd526-fa51-4271-a603-f921392d92b3-kube-api-access-245m4\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.055371 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-utilities\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.055744 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-catalog-content\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.055910 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-utilities\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.074624 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-245m4\" (UniqueName: \"kubernetes.io/projected/3e0dd526-fa51-4271-a603-f921392d92b3-kube-api-access-245m4\") pod \"redhat-operators-pcft6\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.171081 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.648814 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcft6"] Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.856151 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerStarted","Data":"7c088ebd61d0c8441ccf94b01cc7b32bfc25ecc574949b1b5dbc1a52205fd64f"} Feb 26 14:39:59 crc kubenswrapper[4724]: I0226 14:39:59.857754 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerStarted","Data":"268e33b662c05be3f7bdb3520a5093ef7a64b05c3239cc515605adb74542c0f4"} Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.166407 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535280-8t942"] Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.169610 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.172675 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.173137 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.174479 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.183988 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-8t942"] Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.303775 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpq2x\" (UniqueName: \"kubernetes.io/projected/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e-kube-api-access-kpq2x\") pod \"auto-csr-approver-29535280-8t942\" (UID: \"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e\") " pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.405899 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpq2x\" (UniqueName: \"kubernetes.io/projected/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e-kube-api-access-kpq2x\") pod \"auto-csr-approver-29535280-8t942\" (UID: \"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e\") " pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.447425 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpq2x\" (UniqueName: \"kubernetes.io/projected/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e-kube-api-access-kpq2x\") pod \"auto-csr-approver-29535280-8t942\" (UID: \"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e\") " 
pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.496951 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.868355 4724 generic.go:334] "Generic (PLEG): container finished" podID="3e0dd526-fa51-4271-a603-f921392d92b3" containerID="7c088ebd61d0c8441ccf94b01cc7b32bfc25ecc574949b1b5dbc1a52205fd64f" exitCode=0 Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.868504 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerDied","Data":"7c088ebd61d0c8441ccf94b01cc7b32bfc25ecc574949b1b5dbc1a52205fd64f"} Feb 26 14:40:00 crc kubenswrapper[4724]: I0226 14:40:00.957428 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-8t942"] Feb 26 14:40:01 crc kubenswrapper[4724]: I0226 14:40:01.878452 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535280-8t942" event={"ID":"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e","Type":"ContainerStarted","Data":"a1dada807923df70650912671615cf015b1298f2387b3db07a2db8df64a5091c"} Feb 26 14:40:02 crc kubenswrapper[4724]: I0226 14:40:02.890394 4724 generic.go:334] "Generic (PLEG): container finished" podID="c5c108e2-1c28-42df-8c5c-c12f9fac0f4e" containerID="98d2dc89f1cc67ed9e0d58407fe87a017febd212d920bc4066680f8b14955ac0" exitCode=0 Feb 26 14:40:02 crc kubenswrapper[4724]: I0226 14:40:02.890479 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535280-8t942" event={"ID":"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e","Type":"ContainerDied","Data":"98d2dc89f1cc67ed9e0d58407fe87a017febd212d920bc4066680f8b14955ac0"} Feb 26 14:40:02 crc kubenswrapper[4724]: I0226 14:40:02.896635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerStarted","Data":"2d57d81c09e992921b7688c7da00b140acc9abb014f5cbb9639eb2afb9ddbe93"} Feb 26 14:40:03 crc kubenswrapper[4724]: I0226 14:40:03.989892 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:40:03 crc kubenswrapper[4724]: E0226 14:40:03.990383 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.318627 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.487647 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpq2x\" (UniqueName: \"kubernetes.io/projected/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e-kube-api-access-kpq2x\") pod \"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e\" (UID: \"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e\") " Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.498368 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e-kube-api-access-kpq2x" (OuterVolumeSpecName: "kube-api-access-kpq2x") pod "c5c108e2-1c28-42df-8c5c-c12f9fac0f4e" (UID: "c5c108e2-1c28-42df-8c5c-c12f9fac0f4e"). InnerVolumeSpecName "kube-api-access-kpq2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.591030 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpq2x\" (UniqueName: \"kubernetes.io/projected/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e-kube-api-access-kpq2x\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.917884 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535280-8t942" event={"ID":"c5c108e2-1c28-42df-8c5c-c12f9fac0f4e","Type":"ContainerDied","Data":"a1dada807923df70650912671615cf015b1298f2387b3db07a2db8df64a5091c"} Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.917920 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1dada807923df70650912671615cf015b1298f2387b3db07a2db8df64a5091c" Feb 26 14:40:04 crc kubenswrapper[4724]: I0226 14:40:04.917979 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-8t942" Feb 26 14:40:05 crc kubenswrapper[4724]: I0226 14:40:05.400417 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-bw9w6"] Feb 26 14:40:05 crc kubenswrapper[4724]: I0226 14:40:05.409676 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-bw9w6"] Feb 26 14:40:05 crc kubenswrapper[4724]: I0226 14:40:05.987479 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="388796e4-125b-47b9-b97f-8c7c0feff370" path="/var/lib/kubelet/pods/388796e4-125b-47b9-b97f-8c7c0feff370/volumes" Feb 26 14:40:08 crc kubenswrapper[4724]: I0226 14:40:08.979196 4724 generic.go:334] "Generic (PLEG): container finished" podID="3e0dd526-fa51-4271-a603-f921392d92b3" containerID="2d57d81c09e992921b7688c7da00b140acc9abb014f5cbb9639eb2afb9ddbe93" exitCode=0 Feb 26 14:40:08 crc kubenswrapper[4724]: I0226 14:40:08.979223 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerDied","Data":"2d57d81c09e992921b7688c7da00b140acc9abb014f5cbb9639eb2afb9ddbe93"} Feb 26 14:40:09 crc kubenswrapper[4724]: I0226 14:40:09.989976 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerStarted","Data":"5ae3960bbfdfea0a9876e0ddec122c1bdf01928c8892f081235b156462549165"} Feb 26 14:40:10 crc kubenswrapper[4724]: I0226 14:40:10.010561 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pcft6" podStartSLOduration=3.513600232 podStartE2EDuration="12.010505332s" podCreationTimestamp="2026-02-26 14:39:58 +0000 UTC" firstStartedPulling="2026-02-26 14:40:00.873699167 +0000 UTC m=+12867.529438282" lastFinishedPulling="2026-02-26 14:40:09.370604247 +0000 UTC m=+12876.026343382" observedRunningTime="2026-02-26 14:40:10.010058051 +0000 UTC m=+12876.665797156" watchObservedRunningTime="2026-02-26 14:40:10.010505332 +0000 UTC m=+12876.666244447" Feb 26 14:40:16 crc kubenswrapper[4724]: I0226 14:40:16.975318 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:40:18 crc kubenswrapper[4724]: I0226 14:40:18.075569 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"3676cbed636a154b0a67344a1d710ec53887fbc2430f7fe559a1ce84583f4535"} Feb 26 14:40:19 crc kubenswrapper[4724]: I0226 14:40:19.172294 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:40:19 crc kubenswrapper[4724]: I0226 14:40:19.172629 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:40:20 crc kubenswrapper[4724]: I0226 14:40:20.229048 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcft6" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:20 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:20 crc kubenswrapper[4724]: > Feb 26 14:40:30 crc 
kubenswrapper[4724]: I0226 14:40:30.228862 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcft6" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:30 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:30 crc kubenswrapper[4724]: > Feb 26 14:40:34 crc kubenswrapper[4724]: I0226 14:40:34.514766 4724 scope.go:117] "RemoveContainer" containerID="618635b6f586f1b9d35303a873406dbe6a57dcb5e6734f648e5c5378c56e998f" Feb 26 14:40:40 crc kubenswrapper[4724]: I0226 14:40:40.218377 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcft6" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:40 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:40 crc kubenswrapper[4724]: > Feb 26 14:40:50 crc kubenswrapper[4724]: I0226 14:40:50.230371 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcft6" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:50 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:50 crc kubenswrapper[4724]: > Feb 26 14:41:00 crc kubenswrapper[4724]: I0226 14:41:00.245609 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcft6" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:00 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:00 crc kubenswrapper[4724]: > Feb 26 14:41:09 crc kubenswrapper[4724]: I0226 14:41:09.246647 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:41:09 crc kubenswrapper[4724]: I0226 14:41:09.304706 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:41:09 crc kubenswrapper[4724]: I0226 14:41:09.498654 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcft6"] Feb 26 14:41:10 crc kubenswrapper[4724]: I0226 14:41:10.609620 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pcft6" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" containerID="cri-o://5ae3960bbfdfea0a9876e0ddec122c1bdf01928c8892f081235b156462549165" gracePeriod=2 Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.626416 4724 generic.go:334] "Generic (PLEG): container finished" podID="3e0dd526-fa51-4271-a603-f921392d92b3" containerID="5ae3960bbfdfea0a9876e0ddec122c1bdf01928c8892f081235b156462549165" exitCode=0 Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.626741 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerDied","Data":"5ae3960bbfdfea0a9876e0ddec122c1bdf01928c8892f081235b156462549165"} Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.892492 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.937326 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-utilities\") pod \"3e0dd526-fa51-4271-a603-f921392d92b3\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.939035 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-catalog-content\") pod \"3e0dd526-fa51-4271-a603-f921392d92b3\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.939255 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-245m4\" (UniqueName: \"kubernetes.io/projected/3e0dd526-fa51-4271-a603-f921392d92b3-kube-api-access-245m4\") pod \"3e0dd526-fa51-4271-a603-f921392d92b3\" (UID: \"3e0dd526-fa51-4271-a603-f921392d92b3\") " Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.939415 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-utilities" (OuterVolumeSpecName: "utilities") pod "3e0dd526-fa51-4271-a603-f921392d92b3" (UID: "3e0dd526-fa51-4271-a603-f921392d92b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:11 crc kubenswrapper[4724]: I0226 14:41:11.984466 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0dd526-fa51-4271-a603-f921392d92b3-kube-api-access-245m4" (OuterVolumeSpecName: "kube-api-access-245m4") pod "3e0dd526-fa51-4271-a603-f921392d92b3" (UID: "3e0dd526-fa51-4271-a603-f921392d92b3"). InnerVolumeSpecName "kube-api-access-245m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.043672 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-245m4\" (UniqueName: \"kubernetes.io/projected/3e0dd526-fa51-4271-a603-f921392d92b3-kube-api-access-245m4\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.043709 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.115122 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e0dd526-fa51-4271-a603-f921392d92b3" (UID: "3e0dd526-fa51-4271-a603-f921392d92b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.145448 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0dd526-fa51-4271-a603-f921392d92b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.640561 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcft6" event={"ID":"3e0dd526-fa51-4271-a603-f921392d92b3","Type":"ContainerDied","Data":"268e33b662c05be3f7bdb3520a5093ef7a64b05c3239cc515605adb74542c0f4"} Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.641017 4724 scope.go:117] "RemoveContainer" containerID="5ae3960bbfdfea0a9876e0ddec122c1bdf01928c8892f081235b156462549165" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.641282 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcft6" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.667666 4724 scope.go:117] "RemoveContainer" containerID="2d57d81c09e992921b7688c7da00b140acc9abb014f5cbb9639eb2afb9ddbe93" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.701497 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcft6"] Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.702071 4724 scope.go:117] "RemoveContainer" containerID="7c088ebd61d0c8441ccf94b01cc7b32bfc25ecc574949b1b5dbc1a52205fd64f" Feb 26 14:41:12 crc kubenswrapper[4724]: I0226 14:41:12.714646 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pcft6"] Feb 26 14:41:13 crc kubenswrapper[4724]: I0226 14:41:13.986150 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" path="/var/lib/kubelet/pods/3e0dd526-fa51-4271-a603-f921392d92b3/volumes" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.164407 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535282-dp7f4"] Feb 26 14:42:00 crc kubenswrapper[4724]: E0226 14:42:00.165602 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.165620 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" Feb 26 14:42:00 crc kubenswrapper[4724]: E0226 14:42:00.165643 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="extract-content" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.165653 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="extract-content" Feb 26 14:42:00 crc kubenswrapper[4724]: E0226 14:42:00.165688 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="extract-utilities" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.165698 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="extract-utilities" Feb 26 14:42:00 crc kubenswrapper[4724]: E0226 14:42:00.165711 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c108e2-1c28-42df-8c5c-c12f9fac0f4e" containerName="oc" Feb 26 14:42:00 crc 
kubenswrapper[4724]: I0226 14:42:00.165720 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c108e2-1c28-42df-8c5c-c12f9fac0f4e" containerName="oc" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.165996 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0dd526-fa51-4271-a603-f921392d92b3" containerName="registry-server" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.166016 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c108e2-1c28-42df-8c5c-c12f9fac0f4e" containerName="oc" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.168312 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.173825 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-dp7f4"] Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.225355 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.225558 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.226242 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.322703 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkzq\" (UniqueName: \"kubernetes.io/projected/1c550a3f-c65b-46fa-8c1c-312a337d68b4-kube-api-access-6wkzq\") pod \"auto-csr-approver-29535282-dp7f4\" (UID: \"1c550a3f-c65b-46fa-8c1c-312a337d68b4\") " pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.424545 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkzq\" (UniqueName: \"kubernetes.io/projected/1c550a3f-c65b-46fa-8c1c-312a337d68b4-kube-api-access-6wkzq\") pod \"auto-csr-approver-29535282-dp7f4\" (UID: \"1c550a3f-c65b-46fa-8c1c-312a337d68b4\") " pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.458588 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkzq\" (UniqueName: \"kubernetes.io/projected/1c550a3f-c65b-46fa-8c1c-312a337d68b4-kube-api-access-6wkzq\") pod \"auto-csr-approver-29535282-dp7f4\" (UID: \"1c550a3f-c65b-46fa-8c1c-312a337d68b4\") " pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:00 crc kubenswrapper[4724]: I0226 14:42:00.545631 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:01 crc kubenswrapper[4724]: I0226 14:42:01.334363 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-dp7f4"] Feb 26 14:42:01 crc kubenswrapper[4724]: W0226 14:42:01.340813 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c550a3f_c65b_46fa_8c1c_312a337d68b4.slice/crio-ad88167a78bef00ef0023766bf65fd7b70f5e301825cdf7ac1ee22d15ff24d54 WatchSource:0}: Error finding container ad88167a78bef00ef0023766bf65fd7b70f5e301825cdf7ac1ee22d15ff24d54: Status 404 returned error can't find the container with id ad88167a78bef00ef0023766bf65fd7b70f5e301825cdf7ac1ee22d15ff24d54 Feb 26 14:42:01 crc kubenswrapper[4724]: I0226 14:42:01.349085 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:42:02 crc kubenswrapper[4724]: I0226 14:42:02.094214 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" event={"ID":"1c550a3f-c65b-46fa-8c1c-312a337d68b4","Type":"ContainerStarted","Data":"ad88167a78bef00ef0023766bf65fd7b70f5e301825cdf7ac1ee22d15ff24d54"} Feb 26 14:42:04 crc kubenswrapper[4724]: I0226 14:42:04.116808 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" event={"ID":"1c550a3f-c65b-46fa-8c1c-312a337d68b4","Type":"ContainerStarted","Data":"792791f6ed20ee266cb72489a0f4f3f3a6140297c7187b2fe6536e0f10e03974"} Feb 26 14:42:04 crc kubenswrapper[4724]: I0226 14:42:04.146771 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" podStartSLOduration=2.497300704 podStartE2EDuration="4.1467286s" podCreationTimestamp="2026-02-26 14:42:00 +0000 UTC" firstStartedPulling="2026-02-26 14:42:01.34243062 +0000 UTC m=+12987.998169735" lastFinishedPulling="2026-02-26 14:42:02.991858516 +0000 UTC m=+12989.647597631" observedRunningTime="2026-02-26 14:42:04.1403856 +0000 UTC m=+12990.796124755" watchObservedRunningTime="2026-02-26 14:42:04.1467286 +0000 UTC m=+12990.802467725" Feb 26 14:42:06 crc kubenswrapper[4724]: I0226 14:42:06.144502 4724 generic.go:334] "Generic (PLEG): container finished" podID="1c550a3f-c65b-46fa-8c1c-312a337d68b4" containerID="792791f6ed20ee266cb72489a0f4f3f3a6140297c7187b2fe6536e0f10e03974" exitCode=0 Feb 26 14:42:06 crc kubenswrapper[4724]: I0226 14:42:06.144574 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" event={"ID":"1c550a3f-c65b-46fa-8c1c-312a337d68b4","Type":"ContainerDied","Data":"792791f6ed20ee266cb72489a0f4f3f3a6140297c7187b2fe6536e0f10e03974"} Feb 26 14:42:07 crc kubenswrapper[4724]: I0226 14:42:07.570723 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:07 crc kubenswrapper[4724]: I0226 14:42:07.678159 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wkzq\" (UniqueName: \"kubernetes.io/projected/1c550a3f-c65b-46fa-8c1c-312a337d68b4-kube-api-access-6wkzq\") pod \"1c550a3f-c65b-46fa-8c1c-312a337d68b4\" (UID: \"1c550a3f-c65b-46fa-8c1c-312a337d68b4\") " Feb 26 14:42:07 crc kubenswrapper[4724]: I0226 14:42:07.686816 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c550a3f-c65b-46fa-8c1c-312a337d68b4-kube-api-access-6wkzq" (OuterVolumeSpecName: "kube-api-access-6wkzq") pod "1c550a3f-c65b-46fa-8c1c-312a337d68b4" (UID: "1c550a3f-c65b-46fa-8c1c-312a337d68b4"). InnerVolumeSpecName "kube-api-access-6wkzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:07 crc kubenswrapper[4724]: I0226 14:42:07.781071 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wkzq\" (UniqueName: \"kubernetes.io/projected/1c550a3f-c65b-46fa-8c1c-312a337d68b4-kube-api-access-6wkzq\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:08 crc kubenswrapper[4724]: I0226 14:42:08.168206 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" event={"ID":"1c550a3f-c65b-46fa-8c1c-312a337d68b4","Type":"ContainerDied","Data":"ad88167a78bef00ef0023766bf65fd7b70f5e301825cdf7ac1ee22d15ff24d54"} Feb 26 14:42:08 crc kubenswrapper[4724]: I0226 14:42:08.168422 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-dp7f4" Feb 26 14:42:08 crc kubenswrapper[4724]: I0226 14:42:08.169386 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad88167a78bef00ef0023766bf65fd7b70f5e301825cdf7ac1ee22d15ff24d54" Feb 26 14:42:08 crc kubenswrapper[4724]: I0226 14:42:08.236998 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-r2zf9"] Feb 26 14:42:08 crc kubenswrapper[4724]: I0226 14:42:08.248996 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-r2zf9"] Feb 26 14:42:10 crc kubenswrapper[4724]: I0226 14:42:10.001363 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16ef40b-5c00-4c7b-afc0-28f98836bbd5" path="/var/lib/kubelet/pods/d16ef40b-5c00-4c7b-afc0-28f98836bbd5/volumes" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.474858 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8s9wz"] Feb 26 14:42:16 crc kubenswrapper[4724]: E0226 14:42:16.475851 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c550a3f-c65b-46fa-8c1c-312a337d68b4" containerName="oc" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.475869 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c550a3f-c65b-46fa-8c1c-312a337d68b4" containerName="oc" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.476344 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c550a3f-c65b-46fa-8c1c-312a337d68b4" containerName="oc" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.478389 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.486720 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s9wz"] Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.617059 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-utilities\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.617393 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfw5\" (UniqueName: \"kubernetes.io/projected/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-kube-api-access-7rfw5\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.617509 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-catalog-content\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.719812 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-utilities\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.720132 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rfw5\" (UniqueName: \"kubernetes.io/projected/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-kube-api-access-7rfw5\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.720317 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-catalog-content\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.720737 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-utilities\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.721129 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-catalog-content\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.749521 4724 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7rfw5\" (UniqueName: \"kubernetes.io/projected/0ce62393-2f46-4fd6-b3f9-dabc3a65d917-kube-api-access-7rfw5\") pod \"certified-operators-8s9wz\" (UID: \"0ce62393-2f46-4fd6-b3f9-dabc3a65d917\") " pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:16 crc kubenswrapper[4724]: I0226 14:42:16.801152 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:17 crc kubenswrapper[4724]: I0226 14:42:17.360314 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s9wz"] Feb 26 14:42:18 crc kubenswrapper[4724]: I0226 14:42:18.259599 4724 generic.go:334] "Generic (PLEG): container finished" podID="0ce62393-2f46-4fd6-b3f9-dabc3a65d917" containerID="606529fc5359a7d926b7b31d54eb01a55b1efbdef6f6d80d3b15a2cd9a0aada9" exitCode=0 Feb 26 14:42:18 crc kubenswrapper[4724]: I0226 14:42:18.259757 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s9wz" event={"ID":"0ce62393-2f46-4fd6-b3f9-dabc3a65d917","Type":"ContainerDied","Data":"606529fc5359a7d926b7b31d54eb01a55b1efbdef6f6d80d3b15a2cd9a0aada9"} Feb 26 14:42:18 crc kubenswrapper[4724]: I0226 14:42:18.259948 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s9wz" event={"ID":"0ce62393-2f46-4fd6-b3f9-dabc3a65d917","Type":"ContainerStarted","Data":"22dd3af38f9052bee251195031a87a31d48fb19aab0773700bd9f7536b511aca"} Feb 26 14:42:26 crc kubenswrapper[4724]: I0226 14:42:26.331171 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s9wz" event={"ID":"0ce62393-2f46-4fd6-b3f9-dabc3a65d917","Type":"ContainerStarted","Data":"6f4556f01af6f49a452054286279ee2a520c63783bcc7d240b22b13581752516"} Feb 26 14:42:27 crc kubenswrapper[4724]: I0226 14:42:27.351250 4724 generic.go:334] "Generic (PLEG): container finished" podID="0ce62393-2f46-4fd6-b3f9-dabc3a65d917" containerID="6f4556f01af6f49a452054286279ee2a520c63783bcc7d240b22b13581752516" exitCode=0 Feb 26 14:42:27 crc kubenswrapper[4724]: I0226 14:42:27.351373 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s9wz" event={"ID":"0ce62393-2f46-4fd6-b3f9-dabc3a65d917","Type":"ContainerDied","Data":"6f4556f01af6f49a452054286279ee2a520c63783bcc7d240b22b13581752516"} Feb 26 14:42:28 crc kubenswrapper[4724]: I0226 14:42:28.362812 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s9wz" event={"ID":"0ce62393-2f46-4fd6-b3f9-dabc3a65d917","Type":"ContainerStarted","Data":"e273a1aec31ba3df523f6733d3150cf8387d45917ed23c56432bfd5b2213aa53"} Feb 26 14:42:28 crc kubenswrapper[4724]: I0226 14:42:28.386988 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8s9wz" podStartSLOduration=2.597625329 podStartE2EDuration="12.386960979s" podCreationTimestamp="2026-02-26 14:42:16 +0000 UTC" firstStartedPulling="2026-02-26 14:42:18.261223765 +0000 UTC m=+13004.916962880" lastFinishedPulling="2026-02-26 14:42:28.050559415 +0000 UTC m=+13014.706298530" observedRunningTime="2026-02-26 14:42:28.382269201 +0000 UTC m=+13015.038008326" watchObservedRunningTime="2026-02-26 14:42:28.386960979 +0000 UTC m=+13015.042700094" Feb 26 14:42:34 crc kubenswrapper[4724]: I0226 14:42:34.646728 4724 scope.go:117] "RemoveContainer" 
containerID="7fca827e7923fa6da4424bc957db4a04ac19388de983addcce957ade6db62760" Feb 26 14:42:36 crc kubenswrapper[4724]: I0226 14:42:36.802511 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:36 crc kubenswrapper[4724]: I0226 14:42:36.803249 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:36 crc kubenswrapper[4724]: I0226 14:42:36.850614 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:37 crc kubenswrapper[4724]: I0226 14:42:37.500996 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8s9wz" Feb 26 14:42:37 crc kubenswrapper[4724]: I0226 14:42:37.785546 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s9wz"] Feb 26 14:42:37 crc kubenswrapper[4724]: I0226 14:42:37.836606 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gnhv"] Feb 26 14:42:37 crc kubenswrapper[4724]: I0226 14:42:37.837038 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gnhv" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="registry-server" containerID="cri-o://2e01f8064d60ca1c149d5d85a9937168b7214dc2f6b2959585469b9f801ce087" gracePeriod=2 Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.461441 4724 generic.go:334] "Generic (PLEG): container finished" podID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerID="2e01f8064d60ca1c149d5d85a9937168b7214dc2f6b2959585469b9f801ce087" exitCode=0 Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.461631 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerDied","Data":"2e01f8064d60ca1c149d5d85a9937168b7214dc2f6b2959585469b9f801ce087"} Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.462053 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnhv" event={"ID":"0988507e-1e0a-40d5-becb-7dff50d436ac","Type":"ContainerDied","Data":"8beebfc9cea5dacdae8feb5ce15d2681b88afd3a349eede28bc82138ee5bbdd9"} Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.462087 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8beebfc9cea5dacdae8feb5ce15d2681b88afd3a349eede28bc82138ee5bbdd9" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.502439 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.616865 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-catalog-content\") pod \"0988507e-1e0a-40d5-becb-7dff50d436ac\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.616935 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-utilities\") pod \"0988507e-1e0a-40d5-becb-7dff50d436ac\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.617121 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56cts\" (UniqueName: \"kubernetes.io/projected/0988507e-1e0a-40d5-becb-7dff50d436ac-kube-api-access-56cts\") pod \"0988507e-1e0a-40d5-becb-7dff50d436ac\" (UID: \"0988507e-1e0a-40d5-becb-7dff50d436ac\") " Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.617652 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-utilities" (OuterVolumeSpecName: "utilities") pod "0988507e-1e0a-40d5-becb-7dff50d436ac" (UID: "0988507e-1e0a-40d5-becb-7dff50d436ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.618005 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.625811 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0988507e-1e0a-40d5-becb-7dff50d436ac-kube-api-access-56cts" (OuterVolumeSpecName: "kube-api-access-56cts") pod "0988507e-1e0a-40d5-becb-7dff50d436ac" (UID: "0988507e-1e0a-40d5-becb-7dff50d436ac"). InnerVolumeSpecName "kube-api-access-56cts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.669352 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0988507e-1e0a-40d5-becb-7dff50d436ac" (UID: "0988507e-1e0a-40d5-becb-7dff50d436ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.719366 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56cts\" (UniqueName: \"kubernetes.io/projected/0988507e-1e0a-40d5-becb-7dff50d436ac-kube-api-access-56cts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:38 crc kubenswrapper[4724]: I0226 14:42:38.719403 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0988507e-1e0a-40d5-becb-7dff50d436ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:39 crc kubenswrapper[4724]: I0226 14:42:39.470378 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gnhv" Feb 26 14:42:39 crc kubenswrapper[4724]: I0226 14:42:39.505207 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gnhv"] Feb 26 14:42:39 crc kubenswrapper[4724]: I0226 14:42:39.516824 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gnhv"] Feb 26 14:42:39 crc kubenswrapper[4724]: I0226 14:42:39.987686 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" path="/var/lib/kubelet/pods/0988507e-1e0a-40d5-becb-7dff50d436ac/volumes" Feb 26 14:42:46 crc kubenswrapper[4724]: I0226 14:42:46.909313 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:42:46 crc kubenswrapper[4724]: I0226 14:42:46.910612 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:43:16 crc kubenswrapper[4724]: I0226 14:43:16.905826 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:43:16 crc kubenswrapper[4724]: I0226 14:43:16.906573 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:43:34 crc kubenswrapper[4724]: I0226 14:43:34.811945 4724 scope.go:117] "RemoveContainer" containerID="995520926b3b94cdc7bcf673617c332a6cbc0f28364a4a0b7aaf176d743080b3" Feb 26 14:43:34 crc kubenswrapper[4724]: I0226 14:43:34.862549 4724 scope.go:117] "RemoveContainer" containerID="2e01f8064d60ca1c149d5d85a9937168b7214dc2f6b2959585469b9f801ce087" Feb 26 14:43:34 crc kubenswrapper[4724]: I0226 14:43:34.905087 4724 scope.go:117] "RemoveContainer" containerID="1913c638ba48d2d77ac5f2e534bb66e0dfc4d99c67bc73c98efe46bf967f4424" Feb 26 14:43:46 crc kubenswrapper[4724]: I0226 14:43:46.906392 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:43:46 crc kubenswrapper[4724]: I0226 14:43:46.906957 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:43:46 crc kubenswrapper[4724]: I0226 14:43:46.907062 4724 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:43:46 crc kubenswrapper[4724]: I0226 14:43:46.908109 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3676cbed636a154b0a67344a1d710ec53887fbc2430f7fe559a1ce84583f4535"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:43:46 crc kubenswrapper[4724]: I0226 14:43:46.908244 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://3676cbed636a154b0a67344a1d710ec53887fbc2430f7fe559a1ce84583f4535" gracePeriod=600 Feb 26 14:43:47 crc kubenswrapper[4724]: I0226 14:43:47.456213 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="3676cbed636a154b0a67344a1d710ec53887fbc2430f7fe559a1ce84583f4535" exitCode=0 Feb 26 14:43:47 crc kubenswrapper[4724]: I0226 14:43:47.456309 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"3676cbed636a154b0a67344a1d710ec53887fbc2430f7fe559a1ce84583f4535"} Feb 26 14:43:47 crc kubenswrapper[4724]: I0226 14:43:47.456716 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"} Feb 26 14:43:47 crc kubenswrapper[4724]: I0226 14:43:47.456738 4724 scope.go:117] "RemoveContainer" containerID="f0d9790a9afa0fd415b8ad7982d4c009e547dc4a144deaa242f2760335ebe845" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.170140 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535284-k7x5t"] Feb 26 14:44:00 crc kubenswrapper[4724]: E0226 14:44:00.171443 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="extract-utilities" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.171467 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="extract-utilities" Feb 26 14:44:00 crc kubenswrapper[4724]: E0226 14:44:00.171493 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="registry-server" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.171504 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="registry-server" Feb 26 14:44:00 crc kubenswrapper[4724]: E0226 14:44:00.171545 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="extract-content" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.171556 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="extract-content" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.171919 4724 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0988507e-1e0a-40d5-becb-7dff50d436ac" containerName="registry-server" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.173880 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.177867 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.178309 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.178392 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.189133 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-k7x5t"] Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.334794 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcpx\" (UniqueName: \"kubernetes.io/projected/c137ae59-f547-4be7-b2d8-98f858a19787-kube-api-access-nhcpx\") pod \"auto-csr-approver-29535284-k7x5t\" (UID: \"c137ae59-f547-4be7-b2d8-98f858a19787\") " pod="openshift-infra/auto-csr-approver-29535284-k7x5t" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.436081 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhcpx\" (UniqueName: \"kubernetes.io/projected/c137ae59-f547-4be7-b2d8-98f858a19787-kube-api-access-nhcpx\") pod \"auto-csr-approver-29535284-k7x5t\" (UID: \"c137ae59-f547-4be7-b2d8-98f858a19787\") " pod="openshift-infra/auto-csr-approver-29535284-k7x5t" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.455008 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhcpx\" (UniqueName: \"kubernetes.io/projected/c137ae59-f547-4be7-b2d8-98f858a19787-kube-api-access-nhcpx\") pod \"auto-csr-approver-29535284-k7x5t\" (UID: \"c137ae59-f547-4be7-b2d8-98f858a19787\") " pod="openshift-infra/auto-csr-approver-29535284-k7x5t" Feb 26 14:44:00 crc kubenswrapper[4724]: I0226 14:44:00.542434 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" Feb 26 14:44:01 crc kubenswrapper[4724]: I0226 14:44:01.483786 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-k7x5t"] Feb 26 14:44:01 crc kubenswrapper[4724]: W0226 14:44:01.496094 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc137ae59_f547_4be7_b2d8_98f858a19787.slice/crio-ebca14f2cca41ad0886369925e3321195514209b059cfbef999310679c4fa46a WatchSource:0}: Error finding container ebca14f2cca41ad0886369925e3321195514209b059cfbef999310679c4fa46a: Status 404 returned error can't find the container with id ebca14f2cca41ad0886369925e3321195514209b059cfbef999310679c4fa46a Feb 26 14:44:01 crc kubenswrapper[4724]: I0226 14:44:01.586157 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" event={"ID":"c137ae59-f547-4be7-b2d8-98f858a19787","Type":"ContainerStarted","Data":"ebca14f2cca41ad0886369925e3321195514209b059cfbef999310679c4fa46a"} Feb 26 14:44:03 crc kubenswrapper[4724]: I0226 14:44:03.614530 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" event={"ID":"c137ae59-f547-4be7-b2d8-98f858a19787","Type":"ContainerStarted","Data":"3f169d15feb60a0381a8b73ace5423e1444b6f30ded673e3920f06d141e49086"} Feb 26 14:44:03 crc kubenswrapper[4724]: I0226 14:44:03.632287 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" podStartSLOduration=2.489948764 podStartE2EDuration="3.632270439s" podCreationTimestamp="2026-02-26 14:44:00 +0000 UTC" firstStartedPulling="2026-02-26 14:44:01.500217775 +0000 UTC m=+13108.155956890" lastFinishedPulling="2026-02-26 14:44:02.6425394 +0000 UTC m=+13109.298278565" observedRunningTime="2026-02-26 14:44:03.625481588 +0000 UTC m=+13110.281220703" watchObservedRunningTime="2026-02-26 14:44:03.632270439 +0000 UTC m=+13110.288009554" Feb 26 14:44:05 crc kubenswrapper[4724]: I0226 14:44:05.634864 4724 generic.go:334] "Generic (PLEG): container finished" podID="c137ae59-f547-4be7-b2d8-98f858a19787" containerID="3f169d15feb60a0381a8b73ace5423e1444b6f30ded673e3920f06d141e49086" exitCode=0 Feb 26 14:44:05 crc kubenswrapper[4724]: I0226 14:44:05.635138 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" event={"ID":"c137ae59-f547-4be7-b2d8-98f858a19787","Type":"ContainerDied","Data":"3f169d15feb60a0381a8b73ace5423e1444b6f30ded673e3920f06d141e49086"} Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.045621 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.213328 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhcpx\" (UniqueName: \"kubernetes.io/projected/c137ae59-f547-4be7-b2d8-98f858a19787-kube-api-access-nhcpx\") pod \"c137ae59-f547-4be7-b2d8-98f858a19787\" (UID: \"c137ae59-f547-4be7-b2d8-98f858a19787\") " Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.241496 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c137ae59-f547-4be7-b2d8-98f858a19787-kube-api-access-nhcpx" (OuterVolumeSpecName: "kube-api-access-nhcpx") pod "c137ae59-f547-4be7-b2d8-98f858a19787" (UID: "c137ae59-f547-4be7-b2d8-98f858a19787"). InnerVolumeSpecName "kube-api-access-nhcpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.315250 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhcpx\" (UniqueName: \"kubernetes.io/projected/c137ae59-f547-4be7-b2d8-98f858a19787-kube-api-access-nhcpx\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.653639 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-k7x5t" event={"ID":"c137ae59-f547-4be7-b2d8-98f858a19787","Type":"ContainerDied","Data":"ebca14f2cca41ad0886369925e3321195514209b059cfbef999310679c4fa46a"} Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.653684 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebca14f2cca41ad0886369925e3321195514209b059cfbef999310679c4fa46a" Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.653760 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-k7x5t"
Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.713887 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-qj6hq"]
Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.724138 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-qj6hq"]
Feb 26 14:44:07 crc kubenswrapper[4724]: I0226 14:44:07.996490 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3" path="/var/lib/kubelet/pods/d5e89f8f-e341-4bdd-ae3b-f29e90fd20e3/volumes"
Feb 26 14:44:34 crc kubenswrapper[4724]: I0226 14:44:34.965155 4724 scope.go:117] "RemoveContainer" containerID="e3bf516d88bbc5794bc676aa87d96f6d3e535bffb3a165419c44390286616a8b"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.275680 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"]
Feb 26 14:45:00 crc kubenswrapper[4724]: E0226 14:45:00.276668 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c137ae59-f547-4be7-b2d8-98f858a19787" containerName="oc"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.276684 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c137ae59-f547-4be7-b2d8-98f858a19787" containerName="oc"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.276947 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c137ae59-f547-4be7-b2d8-98f858a19787" containerName="oc"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.314570 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.317516 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.318356 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.407670 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/534548d1-4d11-4f37-a044-17928c006adc-secret-volume\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.407723 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtdh8\" (UniqueName: \"kubernetes.io/projected/534548d1-4d11-4f37-a044-17928c006adc-kube-api-access-wtdh8\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.408239 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534548d1-4d11-4f37-a044-17928c006adc-config-volume\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.426630 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"]
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.510354 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/534548d1-4d11-4f37-a044-17928c006adc-secret-volume\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.510404 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtdh8\" (UniqueName: \"kubernetes.io/projected/534548d1-4d11-4f37-a044-17928c006adc-kube-api-access-wtdh8\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.510536 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534548d1-4d11-4f37-a044-17928c006adc-config-volume\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.511530 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534548d1-4d11-4f37-a044-17928c006adc-config-volume\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.581533 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/534548d1-4d11-4f37-a044-17928c006adc-secret-volume\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.582676 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtdh8\" (UniqueName: \"kubernetes.io/projected/534548d1-4d11-4f37-a044-17928c006adc-kube-api-access-wtdh8\") pod \"collect-profiles-29535285-6ppzz\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:00 crc kubenswrapper[4724]: I0226 14:45:00.639805 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:02 crc kubenswrapper[4724]: I0226 14:45:02.350381 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"]
Feb 26 14:45:02 crc kubenswrapper[4724]: I0226 14:45:02.648616 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz" event={"ID":"534548d1-4d11-4f37-a044-17928c006adc","Type":"ContainerStarted","Data":"e9c814ec24cb6a66468e5f73065c219c658dfb9fab0813278ecea24ee0ee611e"}
Feb 26 14:45:02 crc kubenswrapper[4724]: I0226 14:45:02.648665 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz" event={"ID":"534548d1-4d11-4f37-a044-17928c006adc","Type":"ContainerStarted","Data":"7b03da3e6334e1e02bc7f5bdd7e85ee36c3c8d00db75b6b58ca92eb7e807eeee"}
Feb 26 14:45:02 crc kubenswrapper[4724]: I0226 14:45:02.671733 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz" podStartSLOduration=2.671698925 podStartE2EDuration="2.671698925s" podCreationTimestamp="2026-02-26 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:45:02.664563925 +0000 UTC m=+13169.320303060" watchObservedRunningTime="2026-02-26 14:45:02.671698925 +0000 UTC m=+13169.327438050"
Feb 26 14:45:03 crc kubenswrapper[4724]: I0226 14:45:03.659238 4724 generic.go:334] "Generic (PLEG): container finished" podID="534548d1-4d11-4f37-a044-17928c006adc" containerID="e9c814ec24cb6a66468e5f73065c219c658dfb9fab0813278ecea24ee0ee611e" exitCode=0
Feb 26 14:45:03 crc kubenswrapper[4724]: I0226 14:45:03.659282 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz" event={"ID":"534548d1-4d11-4f37-a044-17928c006adc","Type":"ContainerDied","Data":"e9c814ec24cb6a66468e5f73065c219c658dfb9fab0813278ecea24ee0ee611e"}
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.217306 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.374505 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/534548d1-4d11-4f37-a044-17928c006adc-secret-volume\") pod \"534548d1-4d11-4f37-a044-17928c006adc\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") "
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.374724 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtdh8\" (UniqueName: \"kubernetes.io/projected/534548d1-4d11-4f37-a044-17928c006adc-kube-api-access-wtdh8\") pod \"534548d1-4d11-4f37-a044-17928c006adc\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") "
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.374772 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534548d1-4d11-4f37-a044-17928c006adc-config-volume\") pod \"534548d1-4d11-4f37-a044-17928c006adc\" (UID: \"534548d1-4d11-4f37-a044-17928c006adc\") "
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.375844 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/534548d1-4d11-4f37-a044-17928c006adc-config-volume" (OuterVolumeSpecName: "config-volume") pod "534548d1-4d11-4f37-a044-17928c006adc" (UID: "534548d1-4d11-4f37-a044-17928c006adc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.383383 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534548d1-4d11-4f37-a044-17928c006adc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "534548d1-4d11-4f37-a044-17928c006adc" (UID: "534548d1-4d11-4f37-a044-17928c006adc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.389513 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534548d1-4d11-4f37-a044-17928c006adc-kube-api-access-wtdh8" (OuterVolumeSpecName: "kube-api-access-wtdh8") pod "534548d1-4d11-4f37-a044-17928c006adc" (UID: "534548d1-4d11-4f37-a044-17928c006adc"). InnerVolumeSpecName "kube-api-access-wtdh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.418651 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"]
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.431581 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535240-rcgj5"]
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.484483 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtdh8\" (UniqueName: \"kubernetes.io/projected/534548d1-4d11-4f37-a044-17928c006adc-kube-api-access-wtdh8\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.484520 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534548d1-4d11-4f37-a044-17928c006adc-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.484532 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/534548d1-4d11-4f37-a044-17928c006adc-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.677669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz" event={"ID":"534548d1-4d11-4f37-a044-17928c006adc","Type":"ContainerDied","Data":"7b03da3e6334e1e02bc7f5bdd7e85ee36c3c8d00db75b6b58ca92eb7e807eeee"}
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.677722 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b03da3e6334e1e02bc7f5bdd7e85ee36c3c8d00db75b6b58ca92eb7e807eeee"
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.677948 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-6ppzz"
Feb 26 14:45:05 crc kubenswrapper[4724]: I0226 14:45:05.987922 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a934683-a48a-4008-b63f-9cdad4022fba" path="/var/lib/kubelet/pods/2a934683-a48a-4008-b63f-9cdad4022fba/volumes"
Feb 26 14:45:35 crc kubenswrapper[4724]: I0226 14:45:35.186122 4724 scope.go:117] "RemoveContainer" containerID="10f7855e4ba4aa5be5dc943ba8d259a7f08e27aa621febc0ba28d6beca456d9c"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.193335 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535286-4rrgz"]
Feb 26 14:46:00 crc kubenswrapper[4724]: E0226 14:46:00.194294 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534548d1-4d11-4f37-a044-17928c006adc" containerName="collect-profiles"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.194311 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="534548d1-4d11-4f37-a044-17928c006adc" containerName="collect-profiles"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.194591 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="534548d1-4d11-4f37-a044-17928c006adc" containerName="collect-profiles"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.195425 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.207984 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.208257 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.209417 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.223023 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-4rrgz"]
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.316405 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6mx\" (UniqueName: \"kubernetes.io/projected/29245a5b-ad70-4f04-8b05-b4b35f00d1a6-kube-api-access-dm6mx\") pod \"auto-csr-approver-29535286-4rrgz\" (UID: \"29245a5b-ad70-4f04-8b05-b4b35f00d1a6\") " pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.418768 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6mx\" (UniqueName: \"kubernetes.io/projected/29245a5b-ad70-4f04-8b05-b4b35f00d1a6-kube-api-access-dm6mx\") pod \"auto-csr-approver-29535286-4rrgz\" (UID: \"29245a5b-ad70-4f04-8b05-b4b35f00d1a6\") " pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.453286 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6mx\" (UniqueName: \"kubernetes.io/projected/29245a5b-ad70-4f04-8b05-b4b35f00d1a6-kube-api-access-dm6mx\") pod \"auto-csr-approver-29535286-4rrgz\" (UID: \"29245a5b-ad70-4f04-8b05-b4b35f00d1a6\") " pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:00 crc kubenswrapper[4724]: I0226 14:46:00.534996 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:02 crc kubenswrapper[4724]: I0226 14:46:02.215125 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-4rrgz"]
Feb 26 14:46:02 crc kubenswrapper[4724]: I0226 14:46:02.718735 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-4rrgz" event={"ID":"29245a5b-ad70-4f04-8b05-b4b35f00d1a6","Type":"ContainerStarted","Data":"16b213e54a1d87de412fd2acbeca7983dd2359403b52348b228e194d222ee2da"}
Feb 26 14:46:05 crc kubenswrapper[4724]: I0226 14:46:05.741966 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-4rrgz" event={"ID":"29245a5b-ad70-4f04-8b05-b4b35f00d1a6","Type":"ContainerStarted","Data":"bfa544c8a4962096f4ac0fcbe347119a2f0dd012ebf5f7243b40e78615978b27"}
Feb 26 14:46:05 crc kubenswrapper[4724]: I0226 14:46:05.782576 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535286-4rrgz" podStartSLOduration=4.588069385 podStartE2EDuration="5.782038558s" podCreationTimestamp="2026-02-26 14:46:00 +0000 UTC" firstStartedPulling="2026-02-26 14:46:02.287734256 +0000 UTC m=+13228.943473411" lastFinishedPulling="2026-02-26 14:46:03.481703459 +0000 UTC m=+13230.137442584" observedRunningTime="2026-02-26 14:46:05.766068734 +0000 UTC m=+13232.421807869" watchObservedRunningTime="2026-02-26 14:46:05.782038558 +0000 UTC m=+13232.437777683"
Feb 26 14:46:06 crc kubenswrapper[4724]: I0226 14:46:06.763411 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-4rrgz" event={"ID":"29245a5b-ad70-4f04-8b05-b4b35f00d1a6","Type":"ContainerDied","Data":"bfa544c8a4962096f4ac0fcbe347119a2f0dd012ebf5f7243b40e78615978b27"}
Feb 26 14:46:06 crc kubenswrapper[4724]: I0226 14:46:06.764048 4724 generic.go:334] "Generic (PLEG): container finished" podID="29245a5b-ad70-4f04-8b05-b4b35f00d1a6" containerID="bfa544c8a4962096f4ac0fcbe347119a2f0dd012ebf5f7243b40e78615978b27" exitCode=0
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.446477 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.479131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm6mx\" (UniqueName: \"kubernetes.io/projected/29245a5b-ad70-4f04-8b05-b4b35f00d1a6-kube-api-access-dm6mx\") pod \"29245a5b-ad70-4f04-8b05-b4b35f00d1a6\" (UID: \"29245a5b-ad70-4f04-8b05-b4b35f00d1a6\") "
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.495150 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29245a5b-ad70-4f04-8b05-b4b35f00d1a6-kube-api-access-dm6mx" (OuterVolumeSpecName: "kube-api-access-dm6mx") pod "29245a5b-ad70-4f04-8b05-b4b35f00d1a6" (UID: "29245a5b-ad70-4f04-8b05-b4b35f00d1a6"). InnerVolumeSpecName "kube-api-access-dm6mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.581900 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm6mx\" (UniqueName: \"kubernetes.io/projected/29245a5b-ad70-4f04-8b05-b4b35f00d1a6-kube-api-access-dm6mx\") on node \"crc\" DevicePath \"\""
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.789139 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-4rrgz" event={"ID":"29245a5b-ad70-4f04-8b05-b4b35f00d1a6","Type":"ContainerDied","Data":"16b213e54a1d87de412fd2acbeca7983dd2359403b52348b228e194d222ee2da"}
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.789253 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-4rrgz"
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.790153 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16b213e54a1d87de412fd2acbeca7983dd2359403b52348b228e194d222ee2da"
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.887736 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-8t942"]
Feb 26 14:46:08 crc kubenswrapper[4724]: I0226 14:46:08.898545 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-8t942"]
Feb 26 14:46:09 crc kubenswrapper[4724]: I0226 14:46:09.998966 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c108e2-1c28-42df-8c5c-c12f9fac0f4e" path="/var/lib/kubelet/pods/c5c108e2-1c28-42df-8c5c-c12f9fac0f4e/volumes"
Feb 26 14:46:16 crc kubenswrapper[4724]: I0226 14:46:16.907301 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:46:16 crc kubenswrapper[4724]: I0226 14:46:16.908517 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:46:35 crc kubenswrapper[4724]: I0226 14:46:35.276702 4724 scope.go:117] "RemoveContainer" containerID="98d2dc89f1cc67ed9e0d58407fe87a017febd212d920bc4066680f8b14955ac0"
Feb 26 14:46:46 crc kubenswrapper[4724]: I0226 14:46:46.906471 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:46:46 crc kubenswrapper[4724]: I0226 14:46:46.907619 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.363444 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s9fvd"]
Feb 26 14:46:57 crc kubenswrapper[4724]: E0226 14:46:57.364491 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29245a5b-ad70-4f04-8b05-b4b35f00d1a6" containerName="oc"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.364506 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="29245a5b-ad70-4f04-8b05-b4b35f00d1a6" containerName="oc"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.364714 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="29245a5b-ad70-4f04-8b05-b4b35f00d1a6" containerName="oc"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.368506 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.387040 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbqhk\" (UniqueName: \"kubernetes.io/projected/a538977c-6616-4924-95ea-bdbf26111ada-kube-api-access-vbqhk\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.387122 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-catalog-content\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.387191 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-utilities\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.399636 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9fvd"]
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.489343 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-utilities\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.490383 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-utilities\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.490465 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbqhk\" (UniqueName: \"kubernetes.io/projected/a538977c-6616-4924-95ea-bdbf26111ada-kube-api-access-vbqhk\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.490524 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-catalog-content\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.491306 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-catalog-content\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.533725 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbqhk\" (UniqueName: \"kubernetes.io/projected/a538977c-6616-4924-95ea-bdbf26111ada-kube-api-access-vbqhk\") pod \"community-operators-s9fvd\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") " pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:57 crc kubenswrapper[4724]: I0226 14:46:57.746317 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:46:58 crc kubenswrapper[4724]: I0226 14:46:58.458929 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9fvd"]
Feb 26 14:46:58 crc kubenswrapper[4724]: W0226 14:46:58.476486 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda538977c_6616_4924_95ea_bdbf26111ada.slice/crio-1ec750d6f604e15e93e80759b7951ecd75eee8066afb1a866a3fde33a6e4decd WatchSource:0}: Error finding container 1ec750d6f604e15e93e80759b7951ecd75eee8066afb1a866a3fde33a6e4decd: Status 404 returned error can't find the container with id 1ec750d6f604e15e93e80759b7951ecd75eee8066afb1a866a3fde33a6e4decd
Feb 26 14:46:59 crc kubenswrapper[4724]: I0226 14:46:59.341551 4724 generic.go:334] "Generic (PLEG): container finished" podID="a538977c-6616-4924-95ea-bdbf26111ada" containerID="1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b" exitCode=0
Feb 26 14:46:59 crc kubenswrapper[4724]: I0226 14:46:59.341670 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerDied","Data":"1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b"}
Feb 26 14:46:59 crc kubenswrapper[4724]: I0226 14:46:59.341905 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerStarted","Data":"1ec750d6f604e15e93e80759b7951ecd75eee8066afb1a866a3fde33a6e4decd"}
Feb 26 14:47:01 crc kubenswrapper[4724]: I0226 14:47:01.386009 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerStarted","Data":"f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e"}
Feb 26 14:47:06 crc kubenswrapper[4724]: I0226 14:47:06.450643 4724 generic.go:334] "Generic (PLEG): container finished" podID="a538977c-6616-4924-95ea-bdbf26111ada" containerID="f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e" exitCode=0
Feb 26 14:47:06 crc kubenswrapper[4724]: I0226 14:47:06.450754 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerDied","Data":"f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e"}
Feb 26 14:47:06 crc kubenswrapper[4724]: I0226 14:47:06.455660 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 14:47:08 crc kubenswrapper[4724]: I0226 14:47:08.475436 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerStarted","Data":"2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8"}
Feb 26 14:47:08 crc kubenswrapper[4724]: I0226 14:47:08.521630 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s9fvd" podStartSLOduration=3.647354268 podStartE2EDuration="11.521607969s" podCreationTimestamp="2026-02-26 14:46:57 +0000 UTC" firstStartedPulling="2026-02-26 14:46:59.343418616 +0000 UTC m=+13285.999157731" lastFinishedPulling="2026-02-26 14:47:07.217672317 +0000 UTC m=+13293.873411432" observedRunningTime="2026-02-26 14:47:08.508059656 +0000 UTC m=+13295.163798811" watchObservedRunningTime="2026-02-26 14:47:08.521607969 +0000 UTC m=+13295.177347084"
Feb 26 14:47:16 crc kubenswrapper[4724]: I0226 14:47:16.905982 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:47:16 crc kubenswrapper[4724]: I0226 14:47:16.906483 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:47:16 crc kubenswrapper[4724]: I0226 14:47:16.906527 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d"
Feb 26 14:47:16 crc kubenswrapper[4724]: I0226 14:47:16.907256 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 14:47:16 crc kubenswrapper[4724]: I0226 14:47:16.908997 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" gracePeriod=600
Feb 26 14:47:17 crc kubenswrapper[4724]: E0226 14:47:17.057240 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:47:17 crc kubenswrapper[4724]: I0226 14:47:17.571155 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" exitCode=0
Feb 26 14:47:17 crc kubenswrapper[4724]: I0226 14:47:17.571259 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"}
Feb 26 14:47:17 crc kubenswrapper[4724]: I0226 14:47:17.571327 4724 scope.go:117] "RemoveContainer" containerID="3676cbed636a154b0a67344a1d710ec53887fbc2430f7fe559a1ce84583f4535"
Feb 26 14:47:17 crc kubenswrapper[4724]: I0226 14:47:17.571924 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:47:17 crc kubenswrapper[4724]: E0226 14:47:17.572398 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:47:18 crc kubenswrapper[4724]: I0226 14:47:18.394261 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:47:18 crc kubenswrapper[4724]: I0226 14:47:18.394838 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:47:19 crc kubenswrapper[4724]: I0226 14:47:19.483813 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s9fvd" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:47:19 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:47:19 crc kubenswrapper[4724]: >
Feb 26 14:47:27 crc kubenswrapper[4724]: I0226 14:47:27.822452 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:47:27 crc kubenswrapper[4724]: I0226 14:47:27.887319 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:47:27 crc kubenswrapper[4724]: I0226 14:47:27.975596 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:47:27 crc kubenswrapper[4724]: E0226 14:47:27.976282 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:47:28 crc kubenswrapper[4724]: I0226 14:47:28.555490 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9fvd"]
Feb 26 14:47:29 crc kubenswrapper[4724]: I0226 14:47:29.700479 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s9fvd" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="registry-server" containerID="cri-o://2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8" gracePeriod=2
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.153942 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.235219 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-utilities\") pod \"a538977c-6616-4924-95ea-bdbf26111ada\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") "
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.235394 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-catalog-content\") pod \"a538977c-6616-4924-95ea-bdbf26111ada\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") "
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.235432 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbqhk\" (UniqueName: \"kubernetes.io/projected/a538977c-6616-4924-95ea-bdbf26111ada-kube-api-access-vbqhk\") pod \"a538977c-6616-4924-95ea-bdbf26111ada\" (UID: \"a538977c-6616-4924-95ea-bdbf26111ada\") "
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.237805 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-utilities" (OuterVolumeSpecName: "utilities") pod "a538977c-6616-4924-95ea-bdbf26111ada" (UID: "a538977c-6616-4924-95ea-bdbf26111ada"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.260992 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a538977c-6616-4924-95ea-bdbf26111ada-kube-api-access-vbqhk" (OuterVolumeSpecName: "kube-api-access-vbqhk") pod "a538977c-6616-4924-95ea-bdbf26111ada" (UID: "a538977c-6616-4924-95ea-bdbf26111ada"). InnerVolumeSpecName "kube-api-access-vbqhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.305400 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a538977c-6616-4924-95ea-bdbf26111ada" (UID: "a538977c-6616-4924-95ea-bdbf26111ada"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.337898 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.337929 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbqhk\" (UniqueName: \"kubernetes.io/projected/a538977c-6616-4924-95ea-bdbf26111ada-kube-api-access-vbqhk\") on node \"crc\" DevicePath \"\""
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.337939 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a538977c-6616-4924-95ea-bdbf26111ada-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.712813 4724 generic.go:334] "Generic (PLEG): container finished" podID="a538977c-6616-4924-95ea-bdbf26111ada" containerID="2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8" exitCode=0
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.712874 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9fvd"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.712897 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerDied","Data":"2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8"}
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.713478 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9fvd" event={"ID":"a538977c-6616-4924-95ea-bdbf26111ada","Type":"ContainerDied","Data":"1ec750d6f604e15e93e80759b7951ecd75eee8066afb1a866a3fde33a6e4decd"}
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.713518 4724 scope.go:117] "RemoveContainer" containerID="2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.742954 4724 scope.go:117] "RemoveContainer" containerID="f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.785429 4724 scope.go:117] "RemoveContainer" containerID="1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.790569 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9fvd"]
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.804704 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s9fvd"]
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.852391 4724 scope.go:117] "RemoveContainer" containerID="2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8"
Feb 26 14:47:30 crc kubenswrapper[4724]: E0226 14:47:30.856277 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8\": container with ID starting with 2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8 not found: ID does not exist" containerID="2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.856339 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8"} err="failed to get container status \"2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8\": rpc error: code = NotFound desc = could not find container \"2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8\": container with ID starting with 2c6c7de94d89f3614f8fd84dccf456824f8e9b3f1ef0efc1163ec8636fdde4a8 not found: ID does not exist"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.856380 4724 scope.go:117] "RemoveContainer" containerID="f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e"
Feb 26 14:47:30 crc kubenswrapper[4724]: E0226 14:47:30.857155 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e\": container with ID starting with f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e not found: ID does not exist" containerID="f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.857196 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e"} err="failed to get container status \"f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e\": rpc error: code = NotFound desc = could not find container \"f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e\": container with ID starting with f144211667812780fa228b912b1cb3b1f440c506d1450199a1b641e78ae9441e not found: ID does not exist"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.857214 4724 scope.go:117] "RemoveContainer" containerID="1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b"
Feb 26 14:47:30 crc kubenswrapper[4724]: E0226 14:47:30.857633 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b\": container with ID starting with 1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b not found: ID does not exist" containerID="1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b"
Feb 26 14:47:30 crc kubenswrapper[4724]: I0226 14:47:30.857672 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b"} err="failed to get container status \"1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b\": rpc error: code = NotFound desc = could not find container \"1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b\": container with ID starting with 1afe73f10461a86f39ae616a6d13083b95bce8f59d884b06ff1fd24148e42a7b not found: ID does not exist"
Feb 26 14:47:31 crc kubenswrapper[4724]: I0226 14:47:31.984907 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a538977c-6616-4924-95ea-bdbf26111ada" path="/var/lib/kubelet/pods/a538977c-6616-4924-95ea-bdbf26111ada/volumes"
Feb 26 14:47:38 crc kubenswrapper[4724]: I0226 14:47:38.975419 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:47:38 crc kubenswrapper[4724]: E0226 14:47:38.976358 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:47:52 crc kubenswrapper[4724]: I0226 14:47:52.977323 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:47:52 crc kubenswrapper[4724]: E0226 14:47:52.978594 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.188659 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535288-nt7pn"]
Feb 26 14:48:00 crc kubenswrapper[4724]: E0226 14:48:00.189552 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="extract-utilities"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.189567 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="extract-utilities"
Feb 26 14:48:00 crc kubenswrapper[4724]: E0226 14:48:00.189576 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="extract-content"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.189583 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="extract-content"
Feb 26 14:48:00 crc kubenswrapper[4724]: E0226 14:48:00.189629 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="registry-server"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.189635 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="registry-server"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.189865 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="a538977c-6616-4924-95ea-bdbf26111ada" containerName="registry-server"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.190704 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.195975 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.196323 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.200383 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-nt7pn"]
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.201988 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.270032 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4ch\" (UniqueName: \"kubernetes.io/projected/2e320a3f-5d55-45a8-9392-143d1d520d94-kube-api-access-7p4ch\") pod \"auto-csr-approver-29535288-nt7pn\" (UID: \"2e320a3f-5d55-45a8-9392-143d1d520d94\") " pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.372850 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p4ch\" (UniqueName: \"kubernetes.io/projected/2e320a3f-5d55-45a8-9392-143d1d520d94-kube-api-access-7p4ch\") pod \"auto-csr-approver-29535288-nt7pn\" (UID: \"2e320a3f-5d55-45a8-9392-143d1d520d94\") " pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.435985 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p4ch\" (UniqueName: \"kubernetes.io/projected/2e320a3f-5d55-45a8-9392-143d1d520d94-kube-api-access-7p4ch\") pod \"auto-csr-approver-29535288-nt7pn\" (UID: \"2e320a3f-5d55-45a8-9392-143d1d520d94\") " pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:00 crc kubenswrapper[4724]: I0226 14:48:00.537939 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:01 crc kubenswrapper[4724]: I0226 14:48:01.184790 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-nt7pn"]
Feb 26 14:48:02 crc kubenswrapper[4724]: I0226 14:48:02.201911 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-nt7pn" event={"ID":"2e320a3f-5d55-45a8-9392-143d1d520d94","Type":"ContainerStarted","Data":"8048d56dc1a3297b744c1ca36823103b2a2c93015a814b7ece56f5f8bc523a81"}
Feb 26 14:48:03 crc kubenswrapper[4724]: I0226 14:48:03.228100 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-nt7pn" event={"ID":"2e320a3f-5d55-45a8-9392-143d1d520d94","Type":"ContainerStarted","Data":"7da9a7e2728ffe88abef58978c6ed15ad552c7c36d3fffbe4cff57eb050bb3dd"}
Feb 26 14:48:03 crc kubenswrapper[4724]: I0226 14:48:03.250215 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535288-nt7pn" podStartSLOduration=1.693677155 podStartE2EDuration="3.250193571s" podCreationTimestamp="2026-02-26 14:48:00 +0000 UTC" firstStartedPulling="2026-02-26 14:48:01.217850836 +0000 UTC m=+13347.873589951" lastFinishedPulling="2026-02-26 14:48:02.774367242 +0000 UTC m=+13349.430106367" observedRunningTime="2026-02-26 14:48:03.246042226 +0000 UTC m=+13349.901781361" watchObservedRunningTime="2026-02-26 14:48:03.250193571 +0000 UTC m=+13349.905932686"
Feb 26 14:48:05 crc kubenswrapper[4724]: I0226 14:48:05.246023 4724 generic.go:334] "Generic (PLEG): container finished" podID="2e320a3f-5d55-45a8-9392-143d1d520d94" containerID="7da9a7e2728ffe88abef58978c6ed15ad552c7c36d3fffbe4cff57eb050bb3dd" exitCode=0
Feb 26 14:48:05 crc kubenswrapper[4724]: I0226 14:48:05.246091 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-nt7pn" event={"ID":"2e320a3f-5d55-45a8-9392-143d1d520d94","Type":"ContainerDied","Data":"7da9a7e2728ffe88abef58978c6ed15ad552c7c36d3fffbe4cff57eb050bb3dd"}
Feb 26 14:48:05 crc kubenswrapper[4724]: I0226 14:48:05.976251 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:48:05 crc kubenswrapper[4724]: E0226 14:48:05.976813 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:48:06 crc kubenswrapper[4724]: I0226 14:48:06.595455 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:06 crc kubenswrapper[4724]: I0226 14:48:06.695517 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p4ch\" (UniqueName: \"kubernetes.io/projected/2e320a3f-5d55-45a8-9392-143d1d520d94-kube-api-access-7p4ch\") pod \"2e320a3f-5d55-45a8-9392-143d1d520d94\" (UID: \"2e320a3f-5d55-45a8-9392-143d1d520d94\") "
Feb 26 14:48:06 crc kubenswrapper[4724]: I0226 14:48:06.708581 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e320a3f-5d55-45a8-9392-143d1d520d94-kube-api-access-7p4ch" (OuterVolumeSpecName: "kube-api-access-7p4ch") pod "2e320a3f-5d55-45a8-9392-143d1d520d94" (UID: "2e320a3f-5d55-45a8-9392-143d1d520d94"). InnerVolumeSpecName "kube-api-access-7p4ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:48:06 crc kubenswrapper[4724]: I0226 14:48:06.798143 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p4ch\" (UniqueName: \"kubernetes.io/projected/2e320a3f-5d55-45a8-9392-143d1d520d94-kube-api-access-7p4ch\") on node \"crc\" DevicePath \"\""
Feb 26 14:48:07 crc kubenswrapper[4724]: I0226 14:48:07.265747 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-nt7pn" event={"ID":"2e320a3f-5d55-45a8-9392-143d1d520d94","Type":"ContainerDied","Data":"8048d56dc1a3297b744c1ca36823103b2a2c93015a814b7ece56f5f8bc523a81"}
Feb 26 14:48:07 crc kubenswrapper[4724]: I0226 14:48:07.265799 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8048d56dc1a3297b744c1ca36823103b2a2c93015a814b7ece56f5f8bc523a81"
Feb 26 14:48:07 crc kubenswrapper[4724]: I0226 14:48:07.266110 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-nt7pn"
Feb 26 14:48:07 crc kubenswrapper[4724]: I0226 14:48:07.341678 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-dp7f4"]
Feb 26 14:48:07 crc kubenswrapper[4724]: I0226 14:48:07.350192 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-dp7f4"]
Feb 26 14:48:07 crc kubenswrapper[4724]: I0226 14:48:07.987331 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c550a3f-c65b-46fa-8c1c-312a337d68b4" path="/var/lib/kubelet/pods/1c550a3f-c65b-46fa-8c1c-312a337d68b4/volumes"
Feb 26 14:48:16 crc kubenswrapper[4724]: I0226 14:48:16.975936 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:48:16 crc kubenswrapper[4724]: E0226 14:48:16.976542 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.225632 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rpjq9"]
Feb 26 14:48:18 crc kubenswrapper[4724]: E0226 14:48:18.226740 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e320a3f-5d55-45a8-9392-143d1d520d94" containerName="oc"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.226810 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e320a3f-5d55-45a8-9392-143d1d520d94" containerName="oc"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.227073 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e320a3f-5d55-45a8-9392-143d1d520d94" containerName="oc"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.228467 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.237291 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpjq9"]
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.273220 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-catalog-content\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.273337 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-utilities\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.273408 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8f7n\" (UniqueName: \"kubernetes.io/projected/e78e22d0-b625-4d02-b976-31182f147a68-kube-api-access-g8f7n\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.374816 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-catalog-content\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.374880 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-utilities\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.374923 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8f7n\" (UniqueName: \"kubernetes.io/projected/e78e22d0-b625-4d02-b976-31182f147a68-kube-api-access-g8f7n\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.375381 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-catalog-content\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.375694 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-utilities\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.394965 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8f7n\" (UniqueName: \"kubernetes.io/projected/e78e22d0-b625-4d02-b976-31182f147a68-kube-api-access-g8f7n\") pod \"redhat-marketplace-rpjq9\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.549429 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:18 crc kubenswrapper[4724]: I0226 14:48:18.862859 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpjq9"]
Feb 26 14:48:19 crc kubenswrapper[4724]: I0226 14:48:19.425653 4724 generic.go:334] "Generic (PLEG): container finished" podID="e78e22d0-b625-4d02-b976-31182f147a68" containerID="6c88b4778f45a795d4e0b8576581a564a1d11c7bcf7d62b15ac0aa82237e7072" exitCode=0
Feb 26 14:48:19 crc kubenswrapper[4724]: I0226 14:48:19.425723 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerDied","Data":"6c88b4778f45a795d4e0b8576581a564a1d11c7bcf7d62b15ac0aa82237e7072"}
Feb 26 14:48:19 crc kubenswrapper[4724]: I0226 14:48:19.425893 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerStarted","Data":"7899e1b7a22e3222620d92e89e85ca9e12a3cf8a219cb26ef152380a82284c96"}
Feb 26 14:48:21 crc kubenswrapper[4724]: I0226 14:48:21.445227 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerStarted","Data":"7a7b9da9691314f96607214b2a04b95f63c308abcba3f768beca248aac2b1d98"}
Feb 26 14:48:24 crc kubenswrapper[4724]: I0226 14:48:24.480428 4724 generic.go:334] "Generic (PLEG): container finished" podID="e78e22d0-b625-4d02-b976-31182f147a68" containerID="7a7b9da9691314f96607214b2a04b95f63c308abcba3f768beca248aac2b1d98" exitCode=0
Feb 26 14:48:24 crc kubenswrapper[4724]: I0226 14:48:24.480500 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerDied","Data":"7a7b9da9691314f96607214b2a04b95f63c308abcba3f768beca248aac2b1d98"}
Feb 26 14:48:26 crc kubenswrapper[4724]: I0226 14:48:26.504536 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerStarted","Data":"50c6214f4e3d83a39b90fa0571a9869e932871a23c1dedeb4958340b167f75f2"}
Feb 26 14:48:26 crc kubenswrapper[4724]: I0226 14:48:26.530689 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rpjq9" podStartSLOduration=2.864772454 podStartE2EDuration="8.530667039s" podCreationTimestamp="2026-02-26 14:48:18 +0000 UTC" firstStartedPulling="2026-02-26 14:48:19.427611364 +0000 UTC m=+13366.083350479" lastFinishedPulling="2026-02-26 14:48:25.093505949 +0000 UTC m=+13371.749245064" observedRunningTime="2026-02-26 14:48:26.522168454 +0000 UTC m=+13373.177907589" watchObservedRunningTime="2026-02-26 14:48:26.530667039 +0000 UTC m=+13373.186406144"
Feb 26 14:48:28 crc kubenswrapper[4724]: I0226 14:48:28.549831 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:28 crc kubenswrapper[4724]: I0226 14:48:28.550364 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:29 crc kubenswrapper[4724]: I0226 14:48:29.635506 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rpjq9" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:48:29 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:48:29 crc kubenswrapper[4724]: >
Feb 26 14:48:31 crc kubenswrapper[4724]: I0226 14:48:31.980530 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:48:31 crc kubenswrapper[4724]: E0226 14:48:31.981353 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:48:35 crc kubenswrapper[4724]: I0226 14:48:35.412452 4724 scope.go:117] "RemoveContainer" containerID="792791f6ed20ee266cb72489a0f4f3f3a6140297c7187b2fe6536e0f10e03974"
Feb 26 14:48:39 crc kubenswrapper[4724]: I0226 14:48:39.600483 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rpjq9" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:48:39 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:48:39 crc kubenswrapper[4724]: >
Feb 26 14:48:45 crc kubenswrapper[4724]: I0226 14:48:45.976637 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd"
Feb 26 14:48:45 crc kubenswrapper[4724]: E0226 14:48:45.977305 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 14:48:49 crc kubenswrapper[4724]: I0226 14:48:49.618008 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rpjq9" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:48:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:48:49 crc kubenswrapper[4724]: >
Feb 26 14:48:58 crc kubenswrapper[4724]: I0226 14:48:58.599615 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:58 crc kubenswrapper[4724]: I0226 14:48:58.660442 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rpjq9"
Feb 26 14:48:58 crc kubenswrapper[4724]: I0226 14:48:58.844275 4724 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openshift-marketplace/redhat-marketplace-rpjq9"] Feb 26 14:48:59 crc kubenswrapper[4724]: I0226 14:48:59.789723 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rpjq9" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" containerID="cri-o://50c6214f4e3d83a39b90fa0571a9869e932871a23c1dedeb4958340b167f75f2" gracePeriod=2 Feb 26 14:48:59 crc kubenswrapper[4724]: I0226 14:48:59.975349 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:48:59 crc kubenswrapper[4724]: E0226 14:48:59.975896 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:49:00 crc kubenswrapper[4724]: I0226 14:49:00.801412 4724 generic.go:334] "Generic (PLEG): container finished" podID="e78e22d0-b625-4d02-b976-31182f147a68" containerID="50c6214f4e3d83a39b90fa0571a9869e932871a23c1dedeb4958340b167f75f2" exitCode=0 Feb 26 14:49:00 crc kubenswrapper[4724]: I0226 14:49:00.801448 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerDied","Data":"50c6214f4e3d83a39b90fa0571a9869e932871a23c1dedeb4958340b167f75f2"} Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.064806 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpjq9" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.193617 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8f7n\" (UniqueName: \"kubernetes.io/projected/e78e22d0-b625-4d02-b976-31182f147a68-kube-api-access-g8f7n\") pod \"e78e22d0-b625-4d02-b976-31182f147a68\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.193708 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-utilities\") pod \"e78e22d0-b625-4d02-b976-31182f147a68\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.193811 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-catalog-content\") pod \"e78e22d0-b625-4d02-b976-31182f147a68\" (UID: \"e78e22d0-b625-4d02-b976-31182f147a68\") " Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.194156 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-utilities" (OuterVolumeSpecName: "utilities") pod "e78e22d0-b625-4d02-b976-31182f147a68" (UID: "e78e22d0-b625-4d02-b976-31182f147a68"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.194438 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.215591 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e78e22d0-b625-4d02-b976-31182f147a68" (UID: "e78e22d0-b625-4d02-b976-31182f147a68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.227625 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78e22d0-b625-4d02-b976-31182f147a68-kube-api-access-g8f7n" (OuterVolumeSpecName: "kube-api-access-g8f7n") pod "e78e22d0-b625-4d02-b976-31182f147a68" (UID: "e78e22d0-b625-4d02-b976-31182f147a68"). InnerVolumeSpecName "kube-api-access-g8f7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.296167 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8f7n\" (UniqueName: \"kubernetes.io/projected/e78e22d0-b625-4d02-b976-31182f147a68-kube-api-access-g8f7n\") on node \"crc\" DevicePath \"\"" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.296272 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78e22d0-b625-4d02-b976-31182f147a68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.860474 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpjq9" event={"ID":"e78e22d0-b625-4d02-b976-31182f147a68","Type":"ContainerDied","Data":"7899e1b7a22e3222620d92e89e85ca9e12a3cf8a219cb26ef152380a82284c96"} Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.860541 4724 scope.go:117] "RemoveContainer" containerID="50c6214f4e3d83a39b90fa0571a9869e932871a23c1dedeb4958340b167f75f2" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.860748 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpjq9" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.909760 4724 scope.go:117] "RemoveContainer" containerID="7a7b9da9691314f96607214b2a04b95f63c308abcba3f768beca248aac2b1d98" Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.926948 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpjq9"] Feb 26 14:49:01 crc kubenswrapper[4724]: I0226 14:49:01.965794 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpjq9"] Feb 26 14:49:02 crc kubenswrapper[4724]: I0226 14:49:02.005400 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78e22d0-b625-4d02-b976-31182f147a68" path="/var/lib/kubelet/pods/e78e22d0-b625-4d02-b976-31182f147a68/volumes" Feb 26 14:49:02 crc kubenswrapper[4724]: I0226 14:49:02.025631 4724 scope.go:117] "RemoveContainer" containerID="6c88b4778f45a795d4e0b8576581a564a1d11c7bcf7d62b15ac0aa82237e7072" Feb 26 14:49:11 crc kubenswrapper[4724]: I0226 14:49:11.975484 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:49:11 crc kubenswrapper[4724]: E0226 14:49:11.976261 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:49:26 crc kubenswrapper[4724]: I0226 14:49:26.976956 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:49:26 crc kubenswrapper[4724]: E0226 14:49:26.977805 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:49:38 crc kubenswrapper[4724]: I0226 14:49:38.975894 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:49:38 crc kubenswrapper[4724]: E0226 14:49:38.976718 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:49:50 crc kubenswrapper[4724]: I0226 14:49:50.975699 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:49:50 crc kubenswrapper[4724]: E0226 14:49:50.976513 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.170402 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535290-b5pmb"] Feb 26 14:50:00 crc kubenswrapper[4724]: E0226 14:50:00.171996 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="extract-utilities" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.172098 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="extract-utilities" Feb 26 14:50:00 crc kubenswrapper[4724]: E0226 14:50:00.172207 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="extract-content" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.172296 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="extract-content" Feb 26 14:50:00 crc kubenswrapper[4724]: E0226 14:50:00.172402 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.173087 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.173414 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78e22d0-b625-4d02-b976-31182f147a68" containerName="registry-server" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.174353 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.178272 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.178273 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.178331 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.194887 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-b5pmb"] Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.201945 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqz4s\" (UniqueName: \"kubernetes.io/projected/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e-kube-api-access-gqz4s\") pod \"auto-csr-approver-29535290-b5pmb\" (UID: \"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e\") " pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.303579 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqz4s\" (UniqueName: \"kubernetes.io/projected/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e-kube-api-access-gqz4s\") pod \"auto-csr-approver-29535290-b5pmb\" (UID: \"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e\") " pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.328800 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqz4s\" (UniqueName: \"kubernetes.io/projected/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e-kube-api-access-gqz4s\") pod \"auto-csr-approver-29535290-b5pmb\" (UID: \"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e\") " pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:00 crc kubenswrapper[4724]: I0226 14:50:00.505918 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:02 crc kubenswrapper[4724]: I0226 14:50:02.016008 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:50:02 crc kubenswrapper[4724]: E0226 14:50:02.017216 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:50:02 crc kubenswrapper[4724]: I0226 14:50:02.365732 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-b5pmb"] Feb 26 14:50:02 crc kubenswrapper[4724]: I0226 14:50:02.699616 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" event={"ID":"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e","Type":"ContainerStarted","Data":"b15848255656fac41337bf691841ebaba9b5befc4f83b686e7e0e3dbbb6fde7e"} Feb 26 14:50:07 crc kubenswrapper[4724]: I0226 14:50:07.752776 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" event={"ID":"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e","Type":"ContainerStarted","Data":"47860a2a4a7b1c7ff97ef438a560b4136ea27cf47e4824423fd96ef4950c7efd"} Feb 26 14:50:08 crc kubenswrapper[4724]: I0226 14:50:08.780331 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" podStartSLOduration=4.961657387 podStartE2EDuration="8.77843742s" podCreationTimestamp="2026-02-26 14:50:00 +0000 UTC" firstStartedPulling="2026-02-26 14:50:02.431361385 +0000 UTC m=+13469.087100520" lastFinishedPulling="2026-02-26 14:50:06.248141438 +0000 UTC m=+13472.903880553" observedRunningTime="2026-02-26 14:50:08.773029674 +0000 UTC m=+13475.428768789" watchObservedRunningTime="2026-02-26 14:50:08.77843742 +0000 UTC m=+13475.434176535" Feb 26 14:50:13 crc kubenswrapper[4724]: I0226 14:50:13.813304 4724 generic.go:334] "Generic (PLEG): container finished" podID="ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e" containerID="47860a2a4a7b1c7ff97ef438a560b4136ea27cf47e4824423fd96ef4950c7efd" exitCode=0 Feb 26 14:50:13 crc kubenswrapper[4724]: I0226 14:50:13.815313 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" event={"ID":"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e","Type":"ContainerDied","Data":"47860a2a4a7b1c7ff97ef438a560b4136ea27cf47e4824423fd96ef4950c7efd"} Feb 26 14:50:15 crc kubenswrapper[4724]: I0226 14:50:15.277669 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:15 crc kubenswrapper[4724]: I0226 14:50:15.431133 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqz4s\" (UniqueName: \"kubernetes.io/projected/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e-kube-api-access-gqz4s\") pod \"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e\" (UID: \"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e\") " Feb 26 14:50:15 crc kubenswrapper[4724]: I0226 14:50:15.482336 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e-kube-api-access-gqz4s" (OuterVolumeSpecName: "kube-api-access-gqz4s") pod "ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e" (UID: "ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e"). InnerVolumeSpecName "kube-api-access-gqz4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:50:15 crc kubenswrapper[4724]: I0226 14:50:15.533616 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqz4s\" (UniqueName: \"kubernetes.io/projected/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e-kube-api-access-gqz4s\") on node \"crc\" DevicePath \"\"" Feb 26 14:50:15 crc kubenswrapper[4724]: I0226 14:50:15.832547 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" event={"ID":"ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e","Type":"ContainerDied","Data":"b15848255656fac41337bf691841ebaba9b5befc4f83b686e7e0e3dbbb6fde7e"} Feb 26 14:50:15 crc kubenswrapper[4724]: I0226 14:50:15.832616 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-b5pmb" Feb 26 14:50:16 crc kubenswrapper[4724]: I0226 14:50:15.839113 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15848255656fac41337bf691841ebaba9b5befc4f83b686e7e0e3dbbb6fde7e" Feb 26 14:50:16 crc kubenswrapper[4724]: I0226 14:50:16.033019 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:50:16 crc kubenswrapper[4724]: E0226 14:50:16.033274 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:50:16 crc kubenswrapper[4724]: I0226 14:50:16.062842 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-k7x5t"] Feb 26 14:50:16 crc kubenswrapper[4724]: I0226 14:50:16.075879 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-k7x5t"] Feb 26 14:50:17 crc kubenswrapper[4724]: I0226 14:50:17.989506 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c137ae59-f547-4be7-b2d8-98f858a19787" path="/var/lib/kubelet/pods/c137ae59-f547-4be7-b2d8-98f858a19787/volumes" Feb 26 14:50:28 crc kubenswrapper[4724]: I0226 14:50:28.975563 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:50:28 crc kubenswrapper[4724]: E0226 14:50:28.976378 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:50:35 crc kubenswrapper[4724]: I0226 14:50:35.561947 4724 scope.go:117] "RemoveContainer" containerID="3f169d15feb60a0381a8b73ace5423e1444b6f30ded673e3920f06d141e49086" Feb 26 14:50:43 crc kubenswrapper[4724]: I0226 14:50:43.982028 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:50:43 crc kubenswrapper[4724]: E0226 14:50:43.982842 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:50:54 crc kubenswrapper[4724]: I0226 14:50:54.976042 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:50:54 crc kubenswrapper[4724]: E0226 14:50:54.976972 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:51:07 crc kubenswrapper[4724]: I0226 14:51:07.976751 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:51:07 crc kubenswrapper[4724]: E0226 14:51:07.978472 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.781318 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xr87w"] Feb 26 14:51:16 crc kubenswrapper[4724]: E0226 14:51:16.782472 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e" containerName="oc" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.782488 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e" containerName="oc" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.782719 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e" containerName="oc" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.816075 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xr87w"] Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.816205 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.907242 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550ea3fc-915a-433b-9b60-2a6febd5afe4-utilities\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.907392 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqg2\" (UniqueName: \"kubernetes.io/projected/550ea3fc-915a-433b-9b60-2a6febd5afe4-kube-api-access-6gqg2\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:16 crc kubenswrapper[4724]: I0226 14:51:16.907448 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550ea3fc-915a-433b-9b60-2a6febd5afe4-catalog-content\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.009936 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550ea3fc-915a-433b-9b60-2a6febd5afe4-utilities\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.010435 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550ea3fc-915a-433b-9b60-2a6febd5afe4-utilities\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.010512 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqg2\" (UniqueName: \"kubernetes.io/projected/550ea3fc-915a-433b-9b60-2a6febd5afe4-kube-api-access-6gqg2\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.010579 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550ea3fc-915a-433b-9b60-2a6febd5afe4-catalog-content\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.010987 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550ea3fc-915a-433b-9b60-2a6febd5afe4-catalog-content\") pod \"redhat-operators-xr87w\" (UID: \"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.108041 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqg2\" (UniqueName: \"kubernetes.io/projected/550ea3fc-915a-433b-9b60-2a6febd5afe4-kube-api-access-6gqg2\") pod \"redhat-operators-xr87w\" (UID: 
\"550ea3fc-915a-433b-9b60-2a6febd5afe4\") " pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:17 crc kubenswrapper[4724]: I0226 14:51:17.167688 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:51:18 crc kubenswrapper[4724]: I0226 14:51:18.833339 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xr87w"] Feb 26 14:51:18 crc kubenswrapper[4724]: I0226 14:51:18.976608 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:51:18 crc kubenswrapper[4724]: E0226 14:51:18.977066 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:51:19 crc kubenswrapper[4724]: I0226 14:51:19.657363 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerStarted","Data":"122dd6907cac67f8b0c58fd5339163892c1cd04534b9777d6150a4ee9fa83448"} Feb 26 14:51:19 crc kubenswrapper[4724]: I0226 14:51:19.657408 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerStarted","Data":"82e1f32a8eee8d71a4b3b1743b2138158aff99a0037ea299946f4f337a38af2f"} Feb 26 14:51:20 crc kubenswrapper[4724]: I0226 14:51:20.670210 4724 generic.go:334] "Generic (PLEG): container finished" podID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerID="122dd6907cac67f8b0c58fd5339163892c1cd04534b9777d6150a4ee9fa83448" exitCode=0 Feb 26 14:51:20 crc kubenswrapper[4724]: I0226 14:51:20.670252 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerDied","Data":"122dd6907cac67f8b0c58fd5339163892c1cd04534b9777d6150a4ee9fa83448"} Feb 26 14:51:30 crc kubenswrapper[4724]: I0226 14:51:30.976150 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:51:30 crc kubenswrapper[4724]: E0226 14:51:30.976949 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:51:42 crc kubenswrapper[4724]: E0226 14:51:42.338219 4724 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 14:51:42 crc kubenswrapper[4724]: E0226 14:51:42.346735 4724 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gqg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xr87w_openshift-marketplace(550ea3fc-915a-433b-9b60-2a6febd5afe4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:51:42 crc kubenswrapper[4724]: E0226 14:51:42.348456 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" Feb 26 14:51:42 crc kubenswrapper[4724]: E0226 14:51:42.903842 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" Feb 26 14:51:42 crc kubenswrapper[4724]: I0226 14:51:42.975805 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:51:42 crc kubenswrapper[4724]: E0226 14:51:42.976146 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:51:53 crc kubenswrapper[4724]: I0226 14:51:53.989689 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:51:53 crc kubenswrapper[4724]: E0226 14:51:53.992366 4724 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.214271 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535292-ld7bw"] Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.217662 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.227754 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.229404 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.229547 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.246452 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-ld7bw"] Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.383938 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8nj\" (UniqueName: \"kubernetes.io/projected/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d-kube-api-access-tf8nj\") pod \"auto-csr-approver-29535292-ld7bw\" (UID: \"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d\") " pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.487369 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8nj\" (UniqueName: \"kubernetes.io/projected/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d-kube-api-access-tf8nj\") pod \"auto-csr-approver-29535292-ld7bw\" (UID: \"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d\") " pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.546118 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8nj\" (UniqueName: \"kubernetes.io/projected/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d-kube-api-access-tf8nj\") pod \"auto-csr-approver-29535292-ld7bw\" (UID: \"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d\") " pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:00 crc kubenswrapper[4724]: I0226 14:52:00.625514 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:01 crc kubenswrapper[4724]: I0226 14:52:01.086039 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerStarted","Data":"21083e6bd5f67f8e41f7cbf359a20add7e4e6f993ffb705933a933fcf69ae1fe"} Feb 26 14:52:04 crc kubenswrapper[4724]: I0226 14:52:04.357665 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-ld7bw"] Feb 26 14:52:05 crc kubenswrapper[4724]: I0226 14:52:05.136676 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" event={"ID":"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d","Type":"ContainerStarted","Data":"11ebeed970765d3ee58ca462df139c0611a290ce2ea2edc8e9c539ee550a5ed3"} Feb 26 14:52:06 crc kubenswrapper[4724]: I0226 14:52:06.975167 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:52:06 crc kubenswrapper[4724]: E0226 14:52:06.975644 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:52:15 crc kubenswrapper[4724]: I0226 14:52:15.280904 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" event={"ID":"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d","Type":"ContainerStarted","Data":"5065c032ee0806ddfa26cc4bc710fb0e7ed72875f83e53b43163b7a1d5190a9a"} Feb 26 14:52:16 crc kubenswrapper[4724]: I0226 14:52:16.310038 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" podStartSLOduration=12.877259937 podStartE2EDuration="16.310010273s" podCreationTimestamp="2026-02-26 14:52:00 +0000 UTC" firstStartedPulling="2026-02-26 14:52:04.541112601 +0000 UTC m=+13591.196851716" lastFinishedPulling="2026-02-26 14:52:07.973862927 +0000 UTC m=+13594.629602052" observedRunningTime="2026-02-26 14:52:16.309604373 +0000 UTC m=+13602.965343488" watchObservedRunningTime="2026-02-26 14:52:16.310010273 +0000 UTC m=+13602.965749398" Feb 26 14:52:19 crc kubenswrapper[4724]: I0226 14:52:19.976448 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:52:21 crc kubenswrapper[4724]: I0226 14:52:21.366935 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"280923b246c87c62f2d181f81ba14e28f5ae6e43bd7b9256085eb5cf25afbd11"} Feb 26 14:52:22 crc kubenswrapper[4724]: I0226 14:52:22.391224 4724 generic.go:334] "Generic (PLEG): container finished" podID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerID="21083e6bd5f67f8e41f7cbf359a20add7e4e6f993ffb705933a933fcf69ae1fe" exitCode=0 Feb 26 14:52:22 crc kubenswrapper[4724]: I0226 14:52:22.391274 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" 
event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerDied","Data":"21083e6bd5f67f8e41f7cbf359a20add7e4e6f993ffb705933a933fcf69ae1fe"} Feb 26 14:52:22 crc kubenswrapper[4724]: I0226 14:52:22.409940 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:52:23 crc kubenswrapper[4724]: I0226 14:52:23.400406 4724 generic.go:334] "Generic (PLEG): container finished" podID="c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d" containerID="5065c032ee0806ddfa26cc4bc710fb0e7ed72875f83e53b43163b7a1d5190a9a" exitCode=0 Feb 26 14:52:23 crc kubenswrapper[4724]: I0226 14:52:23.400499 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" event={"ID":"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d","Type":"ContainerDied","Data":"5065c032ee0806ddfa26cc4bc710fb0e7ed72875f83e53b43163b7a1d5190a9a"} Feb 26 14:52:24 crc kubenswrapper[4724]: I0226 14:52:24.411380 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerStarted","Data":"4172cc778621be572bec2d935c18f36a1438dbd08d0f366050625aab37a5177b"} Feb 26 14:52:24 crc kubenswrapper[4724]: I0226 14:52:24.453552 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xr87w" podStartSLOduration=5.135029955 podStartE2EDuration="1m8.453528329s" podCreationTimestamp="2026-02-26 14:51:16 +0000 UTC" firstStartedPulling="2026-02-26 14:51:20.672066792 +0000 UTC m=+13547.327805907" lastFinishedPulling="2026-02-26 14:52:23.990565156 +0000 UTC m=+13610.646304281" observedRunningTime="2026-02-26 14:52:24.445814544 +0000 UTC m=+13611.101553659" watchObservedRunningTime="2026-02-26 14:52:24.453528329 +0000 UTC m=+13611.109267454" Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.031525 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.140023 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf8nj\" (UniqueName: \"kubernetes.io/projected/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d-kube-api-access-tf8nj\") pod \"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d\" (UID: \"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d\") " Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.171713 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d-kube-api-access-tf8nj" (OuterVolumeSpecName: "kube-api-access-tf8nj") pod "c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d" (UID: "c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d"). InnerVolumeSpecName "kube-api-access-tf8nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.242460 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf8nj\" (UniqueName: \"kubernetes.io/projected/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d-kube-api-access-tf8nj\") on node \"crc\" DevicePath \"\"" Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.422558 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" event={"ID":"c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d","Type":"ContainerDied","Data":"11ebeed970765d3ee58ca462df139c0611a290ce2ea2edc8e9c539ee550a5ed3"} Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.422807 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11ebeed970765d3ee58ca462df139c0611a290ce2ea2edc8e9c539ee550a5ed3" Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.422632 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-ld7bw" Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.580742 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-4rrgz"] Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.598102 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-4rrgz"] Feb 26 14:52:25 crc kubenswrapper[4724]: I0226 14:52:25.987867 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29245a5b-ad70-4f04-8b05-b4b35f00d1a6" path="/var/lib/kubelet/pods/29245a5b-ad70-4f04-8b05-b4b35f00d1a6/volumes" Feb 26 14:52:27 crc kubenswrapper[4724]: I0226 14:52:27.169251 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:52:27 crc kubenswrapper[4724]: I0226 14:52:27.169728 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:52:28 crc kubenswrapper[4724]: I0226 14:52:28.225051 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:52:28 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:52:28 crc kubenswrapper[4724]: > Feb 26 14:52:35 crc kubenswrapper[4724]: I0226 14:52:35.853079 4724 scope.go:117] "RemoveContainer" containerID="bfa544c8a4962096f4ac0fcbe347119a2f0dd012ebf5f7243b40e78615978b27" Feb 26 14:52:38 crc kubenswrapper[4724]: I0226 14:52:38.216510 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:52:38 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:52:38 crc kubenswrapper[4724]: > Feb 26 14:52:48 crc kubenswrapper[4724]: I0226 14:52:48.226339 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:52:48 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:52:48 crc kubenswrapper[4724]: > Feb 26 14:52:58 crc kubenswrapper[4724]: I0226 14:52:58.217070 4724 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:52:58 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:52:58 crc kubenswrapper[4724]: > Feb 26 14:53:08 crc kubenswrapper[4724]: I0226 14:53:08.224494 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:53:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:53:08 crc kubenswrapper[4724]: > Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.310613 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mftqn"] Feb 26 14:53:11 crc kubenswrapper[4724]: E0226 14:53:11.311830 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d" containerName="oc" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.311850 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d" containerName="oc" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.312119 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d" containerName="oc" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.321398 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mftqn"] Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.321528 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.512974 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-catalog-content\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.513086 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-utilities\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.513358 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxcv\" (UniqueName: \"kubernetes.io/projected/55c340fa-2ab5-4b12-9b53-fceab510ee7a-kube-api-access-vgxcv\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.615675 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-utilities\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.615768 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgxcv\" (UniqueName: \"kubernetes.io/projected/55c340fa-2ab5-4b12-9b53-fceab510ee7a-kube-api-access-vgxcv\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.615863 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-catalog-content\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.616733 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-catalog-content\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.616820 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-utilities\") pod \"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.644903 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgxcv\" (UniqueName: \"kubernetes.io/projected/55c340fa-2ab5-4b12-9b53-fceab510ee7a-kube-api-access-vgxcv\") pod 
\"certified-operators-mftqn\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:11 crc kubenswrapper[4724]: I0226 14:53:11.647514 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:12 crc kubenswrapper[4724]: I0226 14:53:12.676033 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mftqn"] Feb 26 14:53:12 crc kubenswrapper[4724]: W0226 14:53:12.691926 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c340fa_2ab5_4b12_9b53_fceab510ee7a.slice/crio-9c09689e0449cb32de02f7f5681f1ddfda8d45c27de6953f9b9e912d1df697a5 WatchSource:0}: Error finding container 9c09689e0449cb32de02f7f5681f1ddfda8d45c27de6953f9b9e912d1df697a5: Status 404 returned error can't find the container with id 9c09689e0449cb32de02f7f5681f1ddfda8d45c27de6953f9b9e912d1df697a5 Feb 26 14:53:12 crc kubenswrapper[4724]: I0226 14:53:12.892704 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerStarted","Data":"9c09689e0449cb32de02f7f5681f1ddfda8d45c27de6953f9b9e912d1df697a5"} Feb 26 14:53:13 crc kubenswrapper[4724]: I0226 14:53:13.909487 4724 generic.go:334] "Generic (PLEG): container finished" podID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerID="c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a" exitCode=0 Feb 26 14:53:13 crc kubenswrapper[4724]: I0226 14:53:13.909915 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerDied","Data":"c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a"} Feb 26 14:53:18 crc kubenswrapper[4724]: I0226 14:53:18.242926 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:53:18 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:53:18 crc kubenswrapper[4724]: > Feb 26 14:53:19 crc kubenswrapper[4724]: I0226 14:53:19.993323 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerStarted","Data":"b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5"} Feb 26 14:53:28 crc kubenswrapper[4724]: I0226 14:53:28.218975 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:53:28 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:53:28 crc kubenswrapper[4724]: > Feb 26 14:53:31 crc kubenswrapper[4724]: I0226 14:53:31.116311 4724 generic.go:334] "Generic (PLEG): container finished" podID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerID="b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5" exitCode=0 Feb 26 14:53:31 crc kubenswrapper[4724]: I0226 14:53:31.116392 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" 
event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerDied","Data":"b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5"} Feb 26 14:53:33 crc kubenswrapper[4724]: I0226 14:53:33.972998 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-b86hc" podUID="d848b417-9306-4564-b059-0dc84bd7ec1a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:53:36 crc kubenswrapper[4724]: I0226 14:53:36.166665 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerStarted","Data":"9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613"} Feb 26 14:53:36 crc kubenswrapper[4724]: I0226 14:53:36.203970 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mftqn" podStartSLOduration=3.701411399 podStartE2EDuration="25.203951193s" podCreationTimestamp="2026-02-26 14:53:11 +0000 UTC" firstStartedPulling="2026-02-26 14:53:13.912321581 +0000 UTC m=+13660.568060706" lastFinishedPulling="2026-02-26 14:53:35.414861375 +0000 UTC m=+13682.070600500" observedRunningTime="2026-02-26 14:53:36.201134131 +0000 UTC m=+13682.856873246" watchObservedRunningTime="2026-02-26 14:53:36.203951193 +0000 UTC m=+13682.859690308" Feb 26 14:53:38 crc kubenswrapper[4724]: I0226 14:53:38.218541 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:53:38 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:53:38 crc kubenswrapper[4724]: > Feb 26 14:53:41 crc kubenswrapper[4724]: I0226 14:53:41.648727 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:41 crc kubenswrapper[4724]: I0226 14:53:41.649355 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:41 crc kubenswrapper[4724]: I0226 14:53:41.762828 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:42 crc kubenswrapper[4724]: I0226 14:53:42.267124 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:42 crc kubenswrapper[4724]: I0226 14:53:42.550122 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mftqn"] Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.237046 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mftqn" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="registry-server" containerID="cri-o://9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613" gracePeriod=2 Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.800394 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.877471 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-utilities\") pod \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.877711 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-catalog-content\") pod \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.877842 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgxcv\" (UniqueName: \"kubernetes.io/projected/55c340fa-2ab5-4b12-9b53-fceab510ee7a-kube-api-access-vgxcv\") pod \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\" (UID: \"55c340fa-2ab5-4b12-9b53-fceab510ee7a\") " Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.881692 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-utilities" (OuterVolumeSpecName: "utilities") pod "55c340fa-2ab5-4b12-9b53-fceab510ee7a" (UID: "55c340fa-2ab5-4b12-9b53-fceab510ee7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.887650 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c340fa-2ab5-4b12-9b53-fceab510ee7a-kube-api-access-vgxcv" (OuterVolumeSpecName: "kube-api-access-vgxcv") pod "55c340fa-2ab5-4b12-9b53-fceab510ee7a" (UID: "55c340fa-2ab5-4b12-9b53-fceab510ee7a"). InnerVolumeSpecName "kube-api-access-vgxcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.956104 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55c340fa-2ab5-4b12-9b53-fceab510ee7a" (UID: "55c340fa-2ab5-4b12-9b53-fceab510ee7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.981323 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.981550 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgxcv\" (UniqueName: \"kubernetes.io/projected/55c340fa-2ab5-4b12-9b53-fceab510ee7a-kube-api-access-vgxcv\") on node \"crc\" DevicePath \"\"" Feb 26 14:53:44 crc kubenswrapper[4724]: I0226 14:53:44.981614 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c340fa-2ab5-4b12-9b53-fceab510ee7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.245511 4724 generic.go:334] "Generic (PLEG): container finished" podID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerID="9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613" exitCode=0 Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.245576 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerDied","Data":"9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613"} Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.247024 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mftqn" event={"ID":"55c340fa-2ab5-4b12-9b53-fceab510ee7a","Type":"ContainerDied","Data":"9c09689e0449cb32de02f7f5681f1ddfda8d45c27de6953f9b9e912d1df697a5"} Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.245590 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mftqn" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.247087 4724 scope.go:117] "RemoveContainer" containerID="9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.290073 4724 scope.go:117] "RemoveContainer" containerID="b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.294946 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mftqn"] Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.305893 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mftqn"] Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.314784 4724 scope.go:117] "RemoveContainer" containerID="c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.365382 4724 scope.go:117] "RemoveContainer" containerID="9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613" Feb 26 14:53:45 crc kubenswrapper[4724]: E0226 14:53:45.367119 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613\": container with ID starting with 9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613 not found: ID does not exist" containerID="9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.367158 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613"} err="failed to get container status \"9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613\": rpc error: code = NotFound desc = could not find container \"9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613\": container with ID starting with 9efada3652bbb1a0270d177bf84a9dd1c934e0875a5f1e64bc7e2da28aea2613 not found: ID does not exist" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.367283 4724 scope.go:117] "RemoveContainer" containerID="b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5" Feb 26 14:53:45 crc kubenswrapper[4724]: E0226 14:53:45.370846 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5\": container with ID starting with b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5 not found: ID does not exist" containerID="b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.371058 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5"} err="failed to get container status \"b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5\": rpc error: code = NotFound desc = could not find container \"b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5\": container with ID starting with b9975a42c7af75655cebfabf073303591b9274e05f0edbb433b625e7dc00f5d5 not found: ID does not exist" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.371154 4724 scope.go:117] "RemoveContainer" 
containerID="c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a" Feb 26 14:53:45 crc kubenswrapper[4724]: E0226 14:53:45.371758 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a\": container with ID starting with c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a not found: ID does not exist" containerID="c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.371880 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a"} err="failed to get container status \"c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a\": rpc error: code = NotFound desc = could not find container \"c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a\": container with ID starting with c8637aec0b1056c937abd058c6e75c0a2fead841cdacf97968797141268cfa6a not found: ID does not exist" Feb 26 14:53:45 crc kubenswrapper[4724]: I0226 14:53:45.987671 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" path="/var/lib/kubelet/pods/55c340fa-2ab5-4b12-9b53-fceab510ee7a/volumes" Feb 26 14:53:48 crc kubenswrapper[4724]: I0226 14:53:48.213167 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:53:48 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:53:48 crc kubenswrapper[4724]: > Feb 26 14:53:58 crc kubenswrapper[4724]: I0226 14:53:58.228361 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:53:58 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:53:58 crc kubenswrapper[4724]: > Feb 26 14:53:58 crc kubenswrapper[4724]: I0226 14:53:58.228955 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:53:58 crc kubenswrapper[4724]: I0226 14:53:58.229794 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"4172cc778621be572bec2d935c18f36a1438dbd08d0f366050625aab37a5177b"} pod="openshift-marketplace/redhat-operators-xr87w" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 26 14:53:58 crc kubenswrapper[4724]: I0226 14:53:58.229923 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" containerID="cri-o://4172cc778621be572bec2d935c18f36a1438dbd08d0f366050625aab37a5177b" gracePeriod=30 Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.193907 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535294-p4txd"] Feb 26 14:54:00 crc kubenswrapper[4724]: E0226 14:54:00.194434 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" 
containerName="registry-server" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.194450 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="registry-server" Feb 26 14:54:00 crc kubenswrapper[4724]: E0226 14:54:00.194474 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="extract-utilities" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.194483 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="extract-utilities" Feb 26 14:54:00 crc kubenswrapper[4724]: E0226 14:54:00.194499 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="extract-content" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.194506 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="extract-content" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.194760 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c340fa-2ab5-4b12-9b53-fceab510ee7a" containerName="registry-server" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.195585 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.215530 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.221294 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.223263 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.225502 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-p4txd"] Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.313240 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxfx\" (UniqueName: \"kubernetes.io/projected/f6168c23-1074-4235-8354-cbe5d261de46-kube-api-access-nbxfx\") pod \"auto-csr-approver-29535294-p4txd\" (UID: \"f6168c23-1074-4235-8354-cbe5d261de46\") " pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.414751 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbxfx\" (UniqueName: \"kubernetes.io/projected/f6168c23-1074-4235-8354-cbe5d261de46-kube-api-access-nbxfx\") pod \"auto-csr-approver-29535294-p4txd\" (UID: \"f6168c23-1074-4235-8354-cbe5d261de46\") " pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.455075 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxfx\" (UniqueName: \"kubernetes.io/projected/f6168c23-1074-4235-8354-cbe5d261de46-kube-api-access-nbxfx\") pod \"auto-csr-approver-29535294-p4txd\" (UID: \"f6168c23-1074-4235-8354-cbe5d261de46\") " pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:00 crc kubenswrapper[4724]: I0226 14:54:00.617613 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:02 crc kubenswrapper[4724]: I0226 14:54:02.054306 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-p4txd"] Feb 26 14:54:02 crc kubenswrapper[4724]: I0226 14:54:02.393526 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-p4txd" event={"ID":"f6168c23-1074-4235-8354-cbe5d261de46","Type":"ContainerStarted","Data":"aaa51fd349b2d28c744af515ffc217f9dbdaed8cd508b64507de3e112aed2900"} Feb 26 14:54:06 crc kubenswrapper[4724]: I0226 14:54:06.461771 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-p4txd" event={"ID":"f6168c23-1074-4235-8354-cbe5d261de46","Type":"ContainerStarted","Data":"3d72d24330334e7fa275717b40307d1137a2187f31b543ea363a2ae6e7e1a74f"} Feb 26 14:54:06 crc kubenswrapper[4724]: I0226 14:54:06.484727 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535294-p4txd" podStartSLOduration=4.930222194 podStartE2EDuration="6.484704529s" podCreationTimestamp="2026-02-26 14:54:00 +0000 UTC" firstStartedPulling="2026-02-26 14:54:02.06994453 +0000 UTC m=+13708.725683635" lastFinishedPulling="2026-02-26 14:54:03.624426855 +0000 UTC m=+13710.280165970" observedRunningTime="2026-02-26 14:54:06.473736632 +0000 UTC m=+13713.129475757" watchObservedRunningTime="2026-02-26 14:54:06.484704529 +0000 UTC m=+13713.140443644" Feb 26 14:54:08 crc kubenswrapper[4724]: I0226 14:54:08.492417 4724 generic.go:334] "Generic (PLEG): container finished" podID="f6168c23-1074-4235-8354-cbe5d261de46" containerID="3d72d24330334e7fa275717b40307d1137a2187f31b543ea363a2ae6e7e1a74f" exitCode=0 Feb 26 14:54:08 crc kubenswrapper[4724]: I0226 14:54:08.492493 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-p4txd" event={"ID":"f6168c23-1074-4235-8354-cbe5d261de46","Type":"ContainerDied","Data":"3d72d24330334e7fa275717b40307d1137a2187f31b543ea363a2ae6e7e1a74f"} Feb 26 14:54:10 crc kubenswrapper[4724]: I0226 14:54:10.679586 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:10 crc kubenswrapper[4724]: I0226 14:54:10.832547 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbxfx\" (UniqueName: \"kubernetes.io/projected/f6168c23-1074-4235-8354-cbe5d261de46-kube-api-access-nbxfx\") pod \"f6168c23-1074-4235-8354-cbe5d261de46\" (UID: \"f6168c23-1074-4235-8354-cbe5d261de46\") " Feb 26 14:54:10 crc kubenswrapper[4724]: I0226 14:54:10.862318 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6168c23-1074-4235-8354-cbe5d261de46-kube-api-access-nbxfx" (OuterVolumeSpecName: "kube-api-access-nbxfx") pod "f6168c23-1074-4235-8354-cbe5d261de46" (UID: "f6168c23-1074-4235-8354-cbe5d261de46"). InnerVolumeSpecName "kube-api-access-nbxfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:54:10 crc kubenswrapper[4724]: I0226 14:54:10.935667 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbxfx\" (UniqueName: \"kubernetes.io/projected/f6168c23-1074-4235-8354-cbe5d261de46-kube-api-access-nbxfx\") on node \"crc\" DevicePath \"\"" Feb 26 14:54:11 crc kubenswrapper[4724]: I0226 14:54:11.523044 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-p4txd" event={"ID":"f6168c23-1074-4235-8354-cbe5d261de46","Type":"ContainerDied","Data":"aaa51fd349b2d28c744af515ffc217f9dbdaed8cd508b64507de3e112aed2900"} Feb 26 14:54:11 crc kubenswrapper[4724]: I0226 14:54:11.523069 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-p4txd" Feb 26 14:54:11 crc kubenswrapper[4724]: I0226 14:54:11.523090 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa51fd349b2d28c744af515ffc217f9dbdaed8cd508b64507de3e112aed2900" Feb 26 14:54:11 crc kubenswrapper[4724]: I0226 14:54:11.758781 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-nt7pn"] Feb 26 14:54:11 crc kubenswrapper[4724]: I0226 14:54:11.769756 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-nt7pn"] Feb 26 14:54:11 crc kubenswrapper[4724]: I0226 14:54:11.987600 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e320a3f-5d55-45a8-9392-143d1d520d94" path="/var/lib/kubelet/pods/2e320a3f-5d55-45a8-9392-143d1d520d94/volumes" Feb 26 14:54:28 crc kubenswrapper[4724]: I0226 14:54:28.857245 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/0.log" Feb 26 14:54:28 crc kubenswrapper[4724]: I0226 14:54:28.859524 4724 generic.go:334] "Generic (PLEG): container finished" podID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerID="4172cc778621be572bec2d935c18f36a1438dbd08d0f366050625aab37a5177b" exitCode=137 Feb 26 14:54:28 crc kubenswrapper[4724]: I0226 14:54:28.859562 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerDied","Data":"4172cc778621be572bec2d935c18f36a1438dbd08d0f366050625aab37a5177b"} Feb 26 14:54:31 crc kubenswrapper[4724]: I0226 14:54:31.885979 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/0.log" Feb 26 14:54:31 crc kubenswrapper[4724]: I0226 14:54:31.887163 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerStarted","Data":"153c52ba69bb90725afcef5e708262522d70e8c8a6bc14ddf86762f786678c1a"} Feb 26 14:54:35 crc kubenswrapper[4724]: I0226 14:54:35.979159 4724 scope.go:117] "RemoveContainer" containerID="7da9a7e2728ffe88abef58978c6ed15ad552c7c36d3fffbe4cff57eb050bb3dd" Feb 26 14:54:37 crc kubenswrapper[4724]: I0226 14:54:37.167947 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:54:37 crc kubenswrapper[4724]: I0226 14:54:37.170303 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:54:38 crc kubenswrapper[4724]: I0226 14:54:38.220689 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:54:38 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:54:38 crc kubenswrapper[4724]: > Feb 26 14:54:46 crc kubenswrapper[4724]: I0226 14:54:46.906342 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:54:46 crc kubenswrapper[4724]: I0226 14:54:46.906873 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:54:48 crc kubenswrapper[4724]: I0226 14:54:48.219933 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:54:48 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:54:48 crc kubenswrapper[4724]: > Feb 26 14:54:58 crc kubenswrapper[4724]: I0226 14:54:58.214617 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:54:58 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:54:58 crc kubenswrapper[4724]: > Feb 26 14:55:08 crc kubenswrapper[4724]: I0226 14:55:08.222165 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:55:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:55:08 crc kubenswrapper[4724]: > Feb 26 14:55:16 crc kubenswrapper[4724]: I0226 14:55:16.906338 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:55:16 crc kubenswrapper[4724]: I0226 14:55:16.906861 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:55:18 crc kubenswrapper[4724]: I0226 14:55:18.216335 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:55:18 crc kubenswrapper[4724]: timeout: failed to connect 
service ":50051" within 1s Feb 26 14:55:18 crc kubenswrapper[4724]: > Feb 26 14:55:28 crc kubenswrapper[4724]: I0226 14:55:28.216046 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:55:28 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:55:28 crc kubenswrapper[4724]: > Feb 26 14:55:38 crc kubenswrapper[4724]: I0226 14:55:38.226678 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:55:38 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:55:38 crc kubenswrapper[4724]: > Feb 26 14:55:46 crc kubenswrapper[4724]: I0226 14:55:46.906023 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:55:46 crc kubenswrapper[4724]: I0226 14:55:46.906687 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:55:46 crc kubenswrapper[4724]: I0226 14:55:46.906747 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:55:46 crc kubenswrapper[4724]: I0226 14:55:46.980768 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"280923b246c87c62f2d181f81ba14e28f5ae6e43bd7b9256085eb5cf25afbd11"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:55:46 crc kubenswrapper[4724]: I0226 14:55:46.981291 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://280923b246c87c62f2d181f81ba14e28f5ae6e43bd7b9256085eb5cf25afbd11" gracePeriod=600 Feb 26 14:55:47 crc kubenswrapper[4724]: I0226 14:55:47.611870 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="280923b246c87c62f2d181f81ba14e28f5ae6e43bd7b9256085eb5cf25afbd11" exitCode=0 Feb 26 14:55:47 crc kubenswrapper[4724]: I0226 14:55:47.612225 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"280923b246c87c62f2d181f81ba14e28f5ae6e43bd7b9256085eb5cf25afbd11"} Feb 26 14:55:47 crc kubenswrapper[4724]: I0226 14:55:47.612274 4724 scope.go:117] "RemoveContainer" containerID="3724338f5179dd56ec75b8822041809b79bb036a62ac61db7510af54b09d20bd" Feb 26 14:55:48 crc kubenswrapper[4724]: I0226 14:55:48.221102 4724 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:55:48 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:55:48 crc kubenswrapper[4724]: > Feb 26 14:55:48 crc kubenswrapper[4724]: I0226 14:55:48.625004 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"} Feb 26 14:55:58 crc kubenswrapper[4724]: I0226 14:55:58.218817 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:55:58 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:55:58 crc kubenswrapper[4724]: > Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.235480 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535296-5q9w2"] Feb 26 14:56:00 crc kubenswrapper[4724]: E0226 14:56:00.240133 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6168c23-1074-4235-8354-cbe5d261de46" containerName="oc" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.240301 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6168c23-1074-4235-8354-cbe5d261de46" containerName="oc" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.240767 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6168c23-1074-4235-8354-cbe5d261de46" containerName="oc" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.244480 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.256196 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.256229 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.256209 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.313277 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-5q9w2"] Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.320703 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjwd\" (UniqueName: \"kubernetes.io/projected/39428a48-848a-49d0-8ad5-48e204b161b4-kube-api-access-wmjwd\") pod \"auto-csr-approver-29535296-5q9w2\" (UID: \"39428a48-848a-49d0-8ad5-48e204b161b4\") " pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.422576 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmjwd\" (UniqueName: \"kubernetes.io/projected/39428a48-848a-49d0-8ad5-48e204b161b4-kube-api-access-wmjwd\") pod \"auto-csr-approver-29535296-5q9w2\" (UID: \"39428a48-848a-49d0-8ad5-48e204b161b4\") " pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.456623 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmjwd\" (UniqueName: \"kubernetes.io/projected/39428a48-848a-49d0-8ad5-48e204b161b4-kube-api-access-wmjwd\") pod \"auto-csr-approver-29535296-5q9w2\" (UID: \"39428a48-848a-49d0-8ad5-48e204b161b4\") " pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:00 crc kubenswrapper[4724]: I0226 14:56:00.571888 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:03 crc kubenswrapper[4724]: I0226 14:56:03.005408 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-5q9w2"] Feb 26 14:56:03 crc kubenswrapper[4724]: I0226 14:56:03.786803 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" event={"ID":"39428a48-848a-49d0-8ad5-48e204b161b4","Type":"ContainerStarted","Data":"efa16fb91da1f40d248df6d0a0af4fdac9c2e035a423b6b2ababe29ffbd0856e"} Feb 26 14:56:05 crc kubenswrapper[4724]: I0226 14:56:05.806896 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" event={"ID":"39428a48-848a-49d0-8ad5-48e204b161b4","Type":"ContainerStarted","Data":"9928adb1108787e1ed2032e047b984e7407d4b42530b16ed3f7f12ba13abf87e"} Feb 26 14:56:06 crc kubenswrapper[4724]: I0226 14:56:06.827206 4724 generic.go:334] "Generic (PLEG): container finished" podID="39428a48-848a-49d0-8ad5-48e204b161b4" containerID="9928adb1108787e1ed2032e047b984e7407d4b42530b16ed3f7f12ba13abf87e" exitCode=0 Feb 26 14:56:06 crc kubenswrapper[4724]: I0226 14:56:06.827487 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" event={"ID":"39428a48-848a-49d0-8ad5-48e204b161b4","Type":"ContainerDied","Data":"9928adb1108787e1ed2032e047b984e7407d4b42530b16ed3f7f12ba13abf87e"} Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.226973 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:56:08 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:56:08 crc kubenswrapper[4724]: > Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.227410 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.227965 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"153c52ba69bb90725afcef5e708262522d70e8c8a6bc14ddf86762f786678c1a"} pod="openshift-marketplace/redhat-operators-xr87w" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.227995 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" containerID="cri-o://153c52ba69bb90725afcef5e708262522d70e8c8a6bc14ddf86762f786678c1a" gracePeriod=30 Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.250870 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.424430 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmjwd\" (UniqueName: \"kubernetes.io/projected/39428a48-848a-49d0-8ad5-48e204b161b4-kube-api-access-wmjwd\") pod \"39428a48-848a-49d0-8ad5-48e204b161b4\" (UID: \"39428a48-848a-49d0-8ad5-48e204b161b4\") " Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.436409 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39428a48-848a-49d0-8ad5-48e204b161b4-kube-api-access-wmjwd" (OuterVolumeSpecName: "kube-api-access-wmjwd") pod "39428a48-848a-49d0-8ad5-48e204b161b4" (UID: "39428a48-848a-49d0-8ad5-48e204b161b4"). InnerVolumeSpecName "kube-api-access-wmjwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.526730 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmjwd\" (UniqueName: \"kubernetes.io/projected/39428a48-848a-49d0-8ad5-48e204b161b4-kube-api-access-wmjwd\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.880258 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" event={"ID":"39428a48-848a-49d0-8ad5-48e204b161b4","Type":"ContainerDied","Data":"efa16fb91da1f40d248df6d0a0af4fdac9c2e035a423b6b2ababe29ffbd0856e"} Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.880326 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa16fb91da1f40d248df6d0a0af4fdac9c2e035a423b6b2ababe29ffbd0856e" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.881234 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-5q9w2" Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.939011 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-b5pmb"] Feb 26 14:56:08 crc kubenswrapper[4724]: I0226 14:56:08.949541 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-b5pmb"] Feb 26 14:56:09 crc kubenswrapper[4724]: I0226 14:56:09.989150 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e" path="/var/lib/kubelet/pods/ac4d2e51-4b2f-4ce1-a5e1-3f2389c0814e/volumes" Feb 26 14:56:16 crc kubenswrapper[4724]: I0226 14:56:16.982124 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/0.log" Feb 26 14:56:16 crc kubenswrapper[4724]: I0226 14:56:16.983455 4724 generic.go:334] "Generic (PLEG): container finished" podID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerID="153c52ba69bb90725afcef5e708262522d70e8c8a6bc14ddf86762f786678c1a" exitCode=0 Feb 26 14:56:16 crc kubenswrapper[4724]: I0226 14:56:16.983506 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerDied","Data":"153c52ba69bb90725afcef5e708262522d70e8c8a6bc14ddf86762f786678c1a"} Feb 26 14:56:16 crc kubenswrapper[4724]: I0226 14:56:16.983543 4724 scope.go:117] "RemoveContainer" containerID="4172cc778621be572bec2d935c18f36a1438dbd08d0f366050625aab37a5177b" Feb 26 14:56:19 crc kubenswrapper[4724]: I0226 14:56:19.010103 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xr87w" event={"ID":"550ea3fc-915a-433b-9b60-2a6febd5afe4","Type":"ContainerStarted","Data":"0f676e34c94f8dd12b0c69157b2caebe5b561a1fba53618be728e4521f4169d9"} Feb 26 14:56:27 crc kubenswrapper[4724]: I0226 14:56:27.169103 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:56:27 crc kubenswrapper[4724]: I0226 14:56:27.169470 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:56:28 crc kubenswrapper[4724]: I0226 14:56:28.221081 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:56:28 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:56:28 crc kubenswrapper[4724]: > Feb 26 14:56:36 crc kubenswrapper[4724]: I0226 14:56:36.766965 4724 scope.go:117] "RemoveContainer" containerID="47860a2a4a7b1c7ff97ef438a560b4136ea27cf47e4824423fd96ef4950c7efd" Feb 26 14:56:38 crc kubenswrapper[4724]: I0226 14:56:38.219748 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xr87w" podUID="550ea3fc-915a-433b-9b60-2a6febd5afe4" containerName="registry-server" probeResult="failure" output=< Feb 26 14:56:38 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:56:38 crc kubenswrapper[4724]: > Feb 26 14:56:47 crc kubenswrapper[4724]: I0226 14:56:47.825414 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:56:47 crc kubenswrapper[4724]: I0226 14:56:47.890124 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xr87w" Feb 26 14:56:48 crc kubenswrapper[4724]: I0226 14:56:48.901611 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xr87w"] Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.063736 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.064736 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x8l8d" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" containerID="cri-o://8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b" gracePeriod=2 Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.684781 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.821048 4724 generic.go:334] "Generic (PLEG): container finished" podID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerID="8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b" exitCode=0 Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.823004 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8l8d" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.822382 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerDied","Data":"8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b"} Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.830153 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8l8d" event={"ID":"ea4160fe-1944-4874-ae62-704c7884d8ca","Type":"ContainerDied","Data":"6bf23ca54dc1c29ca4edb648084dd7904ebe665e957b59101a5f14a4c343a684"} Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.830013 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-utilities" (OuterVolumeSpecName: "utilities") pod "ea4160fe-1944-4874-ae62-704c7884d8ca" (UID: "ea4160fe-1944-4874-ae62-704c7884d8ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.829385 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-utilities\") pod \"ea4160fe-1944-4874-ae62-704c7884d8ca\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.830386 4724 scope.go:117] "RemoveContainer" containerID="8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.830389 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-catalog-content\") pod \"ea4160fe-1944-4874-ae62-704c7884d8ca\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.830849 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j2wd\" (UniqueName: \"kubernetes.io/projected/ea4160fe-1944-4874-ae62-704c7884d8ca-kube-api-access-9j2wd\") pod \"ea4160fe-1944-4874-ae62-704c7884d8ca\" (UID: \"ea4160fe-1944-4874-ae62-704c7884d8ca\") " Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.832749 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.863702 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4160fe-1944-4874-ae62-704c7884d8ca-kube-api-access-9j2wd" (OuterVolumeSpecName: "kube-api-access-9j2wd") pod "ea4160fe-1944-4874-ae62-704c7884d8ca" (UID: "ea4160fe-1944-4874-ae62-704c7884d8ca"). InnerVolumeSpecName "kube-api-access-9j2wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.880340 4724 scope.go:117] "RemoveContainer" containerID="8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.909233 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea4160fe-1944-4874-ae62-704c7884d8ca" (UID: "ea4160fe-1944-4874-ae62-704c7884d8ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.931619 4724 scope.go:117] "RemoveContainer" containerID="8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.934391 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4160fe-1944-4874-ae62-704c7884d8ca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.934437 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j2wd\" (UniqueName: \"kubernetes.io/projected/ea4160fe-1944-4874-ae62-704c7884d8ca-kube-api-access-9j2wd\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.993544 4724 scope.go:117] "RemoveContainer" containerID="8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b" Feb 26 14:56:49 crc kubenswrapper[4724]: E0226 14:56:49.995751 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b\": container with ID starting with 8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b not found: ID does not exist" containerID="8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.995791 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b"} err="failed to get container status \"8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b\": rpc error: code = NotFound desc = could not find container \"8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b\": container with ID starting with 8290842d5669847747148dc173ddbe0c543b73a29ca00335979254c8e45ea05b not found: ID does not exist" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.995815 4724 scope.go:117] "RemoveContainer" containerID="8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a" Feb 26 14:56:49 crc kubenswrapper[4724]: E0226 14:56:49.996248 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a\": container with ID starting with 8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a not found: ID does not exist" containerID="8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.996272 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a"} err="failed to get container status \"8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a\": rpc error: code = NotFound desc = could not find container \"8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a\": container with ID starting with 8af64e7aa62838edaabefae1834d1c33835a5347e3e492c9fa4007c9678acf6a not found: ID does not exist" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.996292 4724 scope.go:117] "RemoveContainer" containerID="8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8" Feb 26 14:56:49 crc kubenswrapper[4724]: E0226 14:56:49.996565 4724 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8\": container with ID starting with 8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8 not found: ID does not exist" containerID="8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8" Feb 26 14:56:49 crc kubenswrapper[4724]: I0226 14:56:49.996587 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8"} err="failed to get container status \"8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8\": rpc error: code = NotFound desc = could not find container \"8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8\": container with ID starting with 8b281804370be754a98e143389bad4fe75aa0328aca2e31155b6ef493133f2a8 not found: ID does not exist" Feb 26 14:56:50 crc kubenswrapper[4724]: I0226 14:56:50.148148 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 14:56:50 crc kubenswrapper[4724]: I0226 14:56:50.157311 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x8l8d"] Feb 26 14:56:51 crc kubenswrapper[4724]: I0226 14:56:51.988559 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" path="/var/lib/kubelet/pods/ea4160fe-1944-4874-ae62-704c7884d8ca/volumes" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.244769 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-88b2k"] Feb 26 14:57:28 crc kubenswrapper[4724]: E0226 14:57:28.253266 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="extract-utilities" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.253585 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="extract-utilities" Feb 26 14:57:28 crc kubenswrapper[4724]: E0226 14:57:28.254132 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39428a48-848a-49d0-8ad5-48e204b161b4" containerName="oc" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.254155 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="39428a48-848a-49d0-8ad5-48e204b161b4" containerName="oc" Feb 26 14:57:28 crc kubenswrapper[4724]: E0226 14:57:28.254171 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.254193 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" Feb 26 14:57:28 crc kubenswrapper[4724]: E0226 14:57:28.254209 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="extract-content" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.254216 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="extract-content" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.256215 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="39428a48-848a-49d0-8ad5-48e204b161b4" containerName="oc" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.257046 4724 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4160fe-1944-4874-ae62-704c7884d8ca" containerName="registry-server" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.272579 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.277882 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88b2k"] Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.314586 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-utilities\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.314727 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-catalog-content\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.314763 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9zb\" (UniqueName: \"kubernetes.io/projected/b4d0fd21-0c46-4afb-80d2-343961ed0e70-kube-api-access-ln9zb\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.417306 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-utilities\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.417442 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-catalog-content\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.417510 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9zb\" (UniqueName: \"kubernetes.io/projected/b4d0fd21-0c46-4afb-80d2-343961ed0e70-kube-api-access-ln9zb\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.418953 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-utilities\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.418978 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-catalog-content\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.471774 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9zb\" (UniqueName: \"kubernetes.io/projected/b4d0fd21-0c46-4afb-80d2-343961ed0e70-kube-api-access-ln9zb\") pod \"community-operators-88b2k\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:28 crc kubenswrapper[4724]: I0226 14:57:28.624397 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:30 crc kubenswrapper[4724]: I0226 14:57:30.199721 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88b2k"] Feb 26 14:57:30 crc kubenswrapper[4724]: W0226 14:57:30.219921 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d0fd21_0c46_4afb_80d2_343961ed0e70.slice/crio-498cf700c99e7376d6e8aa98d53b4abf67303c66773b2e75561875d5c758e07a WatchSource:0}: Error finding container 498cf700c99e7376d6e8aa98d53b4abf67303c66773b2e75561875d5c758e07a: Status 404 returned error can't find the container with id 498cf700c99e7376d6e8aa98d53b4abf67303c66773b2e75561875d5c758e07a Feb 26 14:57:31 crc kubenswrapper[4724]: I0226 14:57:31.230980 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerID="05475347bf257331de34983daab01a61b8d27a715517fc995fe57fa536462a9c" exitCode=0 Feb 26 14:57:31 crc kubenswrapper[4724]: I0226 14:57:31.231077 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerDied","Data":"05475347bf257331de34983daab01a61b8d27a715517fc995fe57fa536462a9c"} Feb 26 14:57:31 crc kubenswrapper[4724]: I0226 14:57:31.233513 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerStarted","Data":"498cf700c99e7376d6e8aa98d53b4abf67303c66773b2e75561875d5c758e07a"} Feb 26 14:57:31 crc kubenswrapper[4724]: I0226 14:57:31.241338 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:57:33 crc kubenswrapper[4724]: I0226 14:57:33.257576 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerStarted","Data":"47c1a285444c26d91b1b31208ff30fafee4b7bc996657b02956682f1f3cbe6fc"} Feb 26 14:57:35 crc kubenswrapper[4724]: I0226 14:57:35.277933 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerID="47c1a285444c26d91b1b31208ff30fafee4b7bc996657b02956682f1f3cbe6fc" exitCode=0 Feb 26 14:57:35 crc kubenswrapper[4724]: I0226 14:57:35.277995 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerDied","Data":"47c1a285444c26d91b1b31208ff30fafee4b7bc996657b02956682f1f3cbe6fc"} Feb 26 14:57:37 crc kubenswrapper[4724]: I0226 14:57:37.298402 
4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerStarted","Data":"7beb13c472ed3f82109fb98de2f08de967b011db55988244f1080fd5d20d51df"} Feb 26 14:57:38 crc kubenswrapper[4724]: I0226 14:57:38.626830 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:38 crc kubenswrapper[4724]: I0226 14:57:38.627232 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:39 crc kubenswrapper[4724]: I0226 14:57:39.673414 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-88b2k" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="registry-server" probeResult="failure" output=< Feb 26 14:57:39 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:57:39 crc kubenswrapper[4724]: > Feb 26 14:57:49 crc kubenswrapper[4724]: I0226 14:57:49.697048 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-88b2k" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="registry-server" probeResult="failure" output=< Feb 26 14:57:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:57:49 crc kubenswrapper[4724]: > Feb 26 14:57:58 crc kubenswrapper[4724]: I0226 14:57:58.681757 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:58 crc kubenswrapper[4724]: I0226 14:57:58.709222 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-88b2k" podStartSLOduration=25.199888467 podStartE2EDuration="30.70740429s" podCreationTimestamp="2026-02-26 14:57:28 +0000 UTC" firstStartedPulling="2026-02-26 14:57:31.23302583 +0000 UTC m=+13917.888764945" lastFinishedPulling="2026-02-26 14:57:36.740541653 +0000 UTC m=+13923.396280768" observedRunningTime="2026-02-26 14:57:37.347907811 +0000 UTC m=+13924.003646966" watchObservedRunningTime="2026-02-26 14:57:58.70740429 +0000 UTC m=+13945.363143405" Feb 26 14:57:58 crc kubenswrapper[4724]: I0226 14:57:58.728770 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:57:59 crc kubenswrapper[4724]: I0226 14:57:59.410362 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88b2k"] Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.367228 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535298-l9sj7"] Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.396592 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.396754 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-l9sj7"] Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.458857 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.460815 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.461088 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.523049 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-88b2k" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="registry-server" containerID="cri-o://7beb13c472ed3f82109fb98de2f08de967b011db55988244f1080fd5d20d51df" gracePeriod=2 Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.577597 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/67d51c93-371f-450b-bc05-2bbe03bfd362-kube-api-access-8bqbf\") pod \"auto-csr-approver-29535298-l9sj7\" (UID: \"67d51c93-371f-450b-bc05-2bbe03bfd362\") " pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.680677 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/67d51c93-371f-450b-bc05-2bbe03bfd362-kube-api-access-8bqbf\") pod \"auto-csr-approver-29535298-l9sj7\" (UID: \"67d51c93-371f-450b-bc05-2bbe03bfd362\") " pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.710478 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/67d51c93-371f-450b-bc05-2bbe03bfd362-kube-api-access-8bqbf\") pod \"auto-csr-approver-29535298-l9sj7\" (UID: \"67d51c93-371f-450b-bc05-2bbe03bfd362\") " pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:00 crc kubenswrapper[4724]: I0226 14:58:00.740612 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.531835 4724 generic.go:334] "Generic (PLEG): container finished" podID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerID="7beb13c472ed3f82109fb98de2f08de967b011db55988244f1080fd5d20d51df" exitCode=0 Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.531927 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerDied","Data":"7beb13c472ed3f82109fb98de2f08de967b011db55988244f1080fd5d20d51df"} Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.533254 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b2k" event={"ID":"b4d0fd21-0c46-4afb-80d2-343961ed0e70","Type":"ContainerDied","Data":"498cf700c99e7376d6e8aa98d53b4abf67303c66773b2e75561875d5c758e07a"} Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.534029 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498cf700c99e7376d6e8aa98d53b4abf67303c66773b2e75561875d5c758e07a" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.569985 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.597694 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln9zb\" (UniqueName: \"kubernetes.io/projected/b4d0fd21-0c46-4afb-80d2-343961ed0e70-kube-api-access-ln9zb\") pod \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.597797 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-catalog-content\") pod \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.597818 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-utilities\") pod \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\" (UID: \"b4d0fd21-0c46-4afb-80d2-343961ed0e70\") " Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.599787 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-utilities" (OuterVolumeSpecName: "utilities") pod "b4d0fd21-0c46-4afb-80d2-343961ed0e70" (UID: "b4d0fd21-0c46-4afb-80d2-343961ed0e70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.638262 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4d0fd21-0c46-4afb-80d2-343961ed0e70" (UID: "b4d0fd21-0c46-4afb-80d2-343961ed0e70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.655692 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d0fd21-0c46-4afb-80d2-343961ed0e70-kube-api-access-ln9zb" (OuterVolumeSpecName: "kube-api-access-ln9zb") pod "b4d0fd21-0c46-4afb-80d2-343961ed0e70" (UID: "b4d0fd21-0c46-4afb-80d2-343961ed0e70"). InnerVolumeSpecName "kube-api-access-ln9zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.700505 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.700790 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0fd21-0c46-4afb-80d2-343961ed0e70-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.700929 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln9zb\" (UniqueName: \"kubernetes.io/projected/b4d0fd21-0c46-4afb-80d2-343961ed0e70-kube-api-access-ln9zb\") on node \"crc\" DevicePath \"\"" Feb 26 14:58:01 crc kubenswrapper[4724]: I0226 14:58:01.733549 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-l9sj7"] Feb 26 14:58:02 crc kubenswrapper[4724]: I0226 14:58:02.546676 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" event={"ID":"67d51c93-371f-450b-bc05-2bbe03bfd362","Type":"ContainerStarted","Data":"707bcfaa77d906b2c18b126d8b21e221a3570a7c7ba80d5af9e4f6a5fc65952b"} Feb 26 14:58:02 crc kubenswrapper[4724]: I0226 14:58:02.546733 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88b2k" Feb 26 14:58:02 crc kubenswrapper[4724]: I0226 14:58:02.570319 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88b2k"] Feb 26 14:58:02 crc kubenswrapper[4724]: I0226 14:58:02.581163 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-88b2k"] Feb 26 14:58:03 crc kubenswrapper[4724]: I0226 14:58:03.992887 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" path="/var/lib/kubelet/pods/b4d0fd21-0c46-4afb-80d2-343961ed0e70/volumes" Feb 26 14:58:04 crc kubenswrapper[4724]: I0226 14:58:04.567566 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" event={"ID":"67d51c93-371f-450b-bc05-2bbe03bfd362","Type":"ContainerStarted","Data":"60d8c213b0a54202992b4956fe573fab6027bbd3c4d4e0cff4cad94e933e6d13"} Feb 26 14:58:04 crc kubenswrapper[4724]: I0226 14:58:04.586694 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" podStartSLOduration=3.49571592 podStartE2EDuration="4.586676451s" podCreationTimestamp="2026-02-26 14:58:00 +0000 UTC" firstStartedPulling="2026-02-26 14:58:01.743613164 +0000 UTC m=+13948.399352279" lastFinishedPulling="2026-02-26 14:58:02.834573655 +0000 UTC m=+13949.490312810" observedRunningTime="2026-02-26 14:58:04.5858514 +0000 UTC m=+13951.241590545" watchObservedRunningTime="2026-02-26 14:58:04.586676451 +0000 UTC m=+13951.242415586" Feb 26 14:58:05 crc kubenswrapper[4724]: I0226 14:58:05.578777 4724 generic.go:334] "Generic (PLEG): container finished" podID="67d51c93-371f-450b-bc05-2bbe03bfd362" containerID="60d8c213b0a54202992b4956fe573fab6027bbd3c4d4e0cff4cad94e933e6d13" exitCode=0 Feb 26 14:58:05 crc kubenswrapper[4724]: I0226 14:58:05.578874 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" event={"ID":"67d51c93-371f-450b-bc05-2bbe03bfd362","Type":"ContainerDied","Data":"60d8c213b0a54202992b4956fe573fab6027bbd3c4d4e0cff4cad94e933e6d13"} Feb 26 14:58:06 crc kubenswrapper[4724]: I0226 14:58:06.972611 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.109661 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-ld7bw"] Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.112801 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/67d51c93-371f-450b-bc05-2bbe03bfd362-kube-api-access-8bqbf\") pod \"67d51c93-371f-450b-bc05-2bbe03bfd362\" (UID: \"67d51c93-371f-450b-bc05-2bbe03bfd362\") " Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.120193 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d51c93-371f-450b-bc05-2bbe03bfd362-kube-api-access-8bqbf" (OuterVolumeSpecName: "kube-api-access-8bqbf") pod "67d51c93-371f-450b-bc05-2bbe03bfd362" (UID: "67d51c93-371f-450b-bc05-2bbe03bfd362"). InnerVolumeSpecName "kube-api-access-8bqbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.122975 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-ld7bw"] Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.215380 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bqbf\" (UniqueName: \"kubernetes.io/projected/67d51c93-371f-450b-bc05-2bbe03bfd362-kube-api-access-8bqbf\") on node \"crc\" DevicePath \"\"" Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.600378 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" event={"ID":"67d51c93-371f-450b-bc05-2bbe03bfd362","Type":"ContainerDied","Data":"707bcfaa77d906b2c18b126d8b21e221a3570a7c7ba80d5af9e4f6a5fc65952b"} Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.600442 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707bcfaa77d906b2c18b126d8b21e221a3570a7c7ba80d5af9e4f6a5fc65952b" Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.600539 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-l9sj7" Feb 26 14:58:07 crc kubenswrapper[4724]: I0226 14:58:07.986054 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d" path="/var/lib/kubelet/pods/c0e9f0e3-2baa-48b8-8a5a-b8d148b6660d/volumes" Feb 26 14:58:16 crc kubenswrapper[4724]: I0226 14:58:16.906296 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:58:16 crc kubenswrapper[4724]: I0226 14:58:16.910541 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:58:37 crc kubenswrapper[4724]: I0226 14:58:37.059067 4724 scope.go:117] "RemoveContainer" containerID="5065c032ee0806ddfa26cc4bc710fb0e7ed72875f83e53b43163b7a1d5190a9a" Feb 26 14:58:46 crc kubenswrapper[4724]: I0226 14:58:46.907013 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:58:46 crc kubenswrapper[4724]: I0226 14:58:46.907731 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.691848 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ptfl"] Feb 26 14:59:09 crc kubenswrapper[4724]: E0226 14:59:09.695127 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d51c93-371f-450b-bc05-2bbe03bfd362" containerName="oc" Feb 26 14:59:09 
crc kubenswrapper[4724]: I0226 14:59:09.695171 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d51c93-371f-450b-bc05-2bbe03bfd362" containerName="oc" Feb 26 14:59:09 crc kubenswrapper[4724]: E0226 14:59:09.695296 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="registry-server" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.695310 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="registry-server" Feb 26 14:59:09 crc kubenswrapper[4724]: E0226 14:59:09.695336 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="extract-utilities" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.695349 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="extract-utilities" Feb 26 14:59:09 crc kubenswrapper[4724]: E0226 14:59:09.695368 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="extract-content" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.695379 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="extract-content" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.695782 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d0fd21-0c46-4afb-80d2-343961ed0e70" containerName="registry-server" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.695841 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d51c93-371f-450b-bc05-2bbe03bfd362" containerName="oc" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.698396 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.701456 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ptfl"] Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.742683 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-utilities\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.742994 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-catalog-content\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.743047 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmgs\" (UniqueName: \"kubernetes.io/projected/53e040ad-c269-4ea3-adb9-bd1ee947a829-kube-api-access-5lmgs\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.844977 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmgs\" (UniqueName: \"kubernetes.io/projected/53e040ad-c269-4ea3-adb9-bd1ee947a829-kube-api-access-5lmgs\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.845088 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-utilities\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.845163 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-catalog-content\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.845610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-catalog-content\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.846479 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-utilities\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:09 crc kubenswrapper[4724]: I0226 14:59:09.868600 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5lmgs\" (UniqueName: \"kubernetes.io/projected/53e040ad-c269-4ea3-adb9-bd1ee947a829-kube-api-access-5lmgs\") pod \"redhat-marketplace-9ptfl\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:10 crc kubenswrapper[4724]: I0226 14:59:10.019682 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:10 crc kubenswrapper[4724]: I0226 14:59:10.588274 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ptfl"] Feb 26 14:59:11 crc kubenswrapper[4724]: I0226 14:59:11.298884 4724 generic.go:334] "Generic (PLEG): container finished" podID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerID="92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e" exitCode=0 Feb 26 14:59:11 crc kubenswrapper[4724]: I0226 14:59:11.298980 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerDied","Data":"92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e"} Feb 26 14:59:11 crc kubenswrapper[4724]: I0226 14:59:11.299429 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerStarted","Data":"26e2b972fe2192c717d826ef29308b8a3b950b5b4da69f6f9fb79c8607bd408d"} Feb 26 14:59:13 crc kubenswrapper[4724]: I0226 14:59:13.322398 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerStarted","Data":"26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746"} Feb 26 14:59:14 crc kubenswrapper[4724]: I0226 14:59:14.330292 4724 generic.go:334] "Generic (PLEG): container finished" podID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerID="26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746" exitCode=0 Feb 26 14:59:14 crc kubenswrapper[4724]: I0226 14:59:14.330342 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerDied","Data":"26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746"} Feb 26 14:59:15 crc kubenswrapper[4724]: I0226 14:59:15.340334 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerStarted","Data":"d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525"} Feb 26 14:59:15 crc kubenswrapper[4724]: I0226 14:59:15.369501 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ptfl" podStartSLOduration=2.914456255 podStartE2EDuration="6.368363478s" podCreationTimestamp="2026-02-26 14:59:09 +0000 UTC" firstStartedPulling="2026-02-26 14:59:11.300881501 +0000 UTC m=+14017.956620616" lastFinishedPulling="2026-02-26 14:59:14.754788724 +0000 UTC m=+14021.410527839" observedRunningTime="2026-02-26 14:59:15.363528136 +0000 UTC m=+14022.019267271" watchObservedRunningTime="2026-02-26 14:59:15.368363478 +0000 UTC m=+14022.024102593" Feb 26 14:59:16 crc kubenswrapper[4724]: I0226 14:59:16.907209 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:59:16 crc kubenswrapper[4724]: I0226 14:59:16.907473 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:59:16 crc kubenswrapper[4724]: I0226 14:59:16.907508 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 14:59:16 crc kubenswrapper[4724]: I0226 14:59:16.910220 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:59:16 crc kubenswrapper[4724]: I0226 14:59:16.910296 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" gracePeriod=600 Feb 26 14:59:17 crc kubenswrapper[4724]: E0226 14:59:17.037613 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:59:17 crc kubenswrapper[4724]: I0226 14:59:17.360147 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" exitCode=0 Feb 26 14:59:17 crc kubenswrapper[4724]: I0226 14:59:17.360213 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"} Feb 26 14:59:17 crc kubenswrapper[4724]: I0226 14:59:17.360495 4724 scope.go:117] "RemoveContainer" containerID="280923b246c87c62f2d181f81ba14e28f5ae6e43bd7b9256085eb5cf25afbd11" Feb 26 14:59:17 crc kubenswrapper[4724]: I0226 14:59:17.361084 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 14:59:17 crc kubenswrapper[4724]: E0226 14:59:17.361363 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" 
podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:59:20 crc kubenswrapper[4724]: I0226 14:59:20.020806 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:20 crc kubenswrapper[4724]: I0226 14:59:20.021304 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:21 crc kubenswrapper[4724]: I0226 14:59:21.061101 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9ptfl" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="registry-server" probeResult="failure" output=< Feb 26 14:59:21 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 14:59:21 crc kubenswrapper[4724]: > Feb 26 14:59:30 crc kubenswrapper[4724]: I0226 14:59:30.116688 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:30 crc kubenswrapper[4724]: I0226 14:59:30.166200 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:30 crc kubenswrapper[4724]: I0226 14:59:30.359633 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ptfl"] Feb 26 14:59:31 crc kubenswrapper[4724]: I0226 14:59:31.480899 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9ptfl" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="registry-server" containerID="cri-o://d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525" gracePeriod=2 Feb 26 14:59:31 crc kubenswrapper[4724]: I0226 14:59:31.979391 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 14:59:31 crc kubenswrapper[4724]: E0226 14:59:31.980318 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.319905 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.379632 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-utilities\") pod \"53e040ad-c269-4ea3-adb9-bd1ee947a829\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.379700 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmgs\" (UniqueName: \"kubernetes.io/projected/53e040ad-c269-4ea3-adb9-bd1ee947a829-kube-api-access-5lmgs\") pod \"53e040ad-c269-4ea3-adb9-bd1ee947a829\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.379949 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-catalog-content\") pod \"53e040ad-c269-4ea3-adb9-bd1ee947a829\" (UID: \"53e040ad-c269-4ea3-adb9-bd1ee947a829\") " Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.381494 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-utilities" (OuterVolumeSpecName: "utilities") pod "53e040ad-c269-4ea3-adb9-bd1ee947a829" (UID: "53e040ad-c269-4ea3-adb9-bd1ee947a829"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.386270 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.400980 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53e040ad-c269-4ea3-adb9-bd1ee947a829" (UID: "53e040ad-c269-4ea3-adb9-bd1ee947a829"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.403093 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e040ad-c269-4ea3-adb9-bd1ee947a829-kube-api-access-5lmgs" (OuterVolumeSpecName: "kube-api-access-5lmgs") pod "53e040ad-c269-4ea3-adb9-bd1ee947a829" (UID: "53e040ad-c269-4ea3-adb9-bd1ee947a829"). InnerVolumeSpecName "kube-api-access-5lmgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.487854 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e040ad-c269-4ea3-adb9-bd1ee947a829-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.487881 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmgs\" (UniqueName: \"kubernetes.io/projected/53e040ad-c269-4ea3-adb9-bd1ee947a829-kube-api-access-5lmgs\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.491380 4724 generic.go:334] "Generic (PLEG): container finished" podID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerID="d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525" exitCode=0 Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.491437 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerDied","Data":"d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525"} Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.491471 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ptfl" event={"ID":"53e040ad-c269-4ea3-adb9-bd1ee947a829","Type":"ContainerDied","Data":"26e2b972fe2192c717d826ef29308b8a3b950b5b4da69f6f9fb79c8607bd408d"} Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.491497 4724 scope.go:117] "RemoveContainer" containerID="d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.491682 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ptfl" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.518832 4724 scope.go:117] "RemoveContainer" containerID="26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.538235 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ptfl"] Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.558599 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ptfl"] Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.563101 4724 scope.go:117] "RemoveContainer" containerID="92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.599932 4724 scope.go:117] "RemoveContainer" containerID="d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525" Feb 26 14:59:32 crc kubenswrapper[4724]: E0226 14:59:32.604465 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525\": container with ID starting with d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525 not found: ID does not exist" containerID="d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.605766 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525"} err="failed to get container status \"d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525\": rpc error: code = NotFound desc = could not find container \"d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525\": container with ID starting with d4f2aaef238d9546d139feb3e9a42b1090b95ba640e4c4f345e5ae6375392525 not found: ID does not exist" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.605804 4724 scope.go:117] "RemoveContainer" containerID="26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746" Feb 26 14:59:32 crc kubenswrapper[4724]: E0226 14:59:32.606243 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746\": container with ID starting with 26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746 not found: ID does not exist" containerID="26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.606264 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746"} err="failed to get container status \"26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746\": rpc error: code = NotFound desc = could not find container \"26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746\": container with ID starting with 26fb9cab080d1a5510d8d29d75ac557adbc791b1680b5041088f1d291807e746 not found: ID does not exist" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.606277 4724 scope.go:117] "RemoveContainer" containerID="92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e" Feb 26 14:59:32 crc kubenswrapper[4724]: E0226 14:59:32.606468 4724 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e\": container with ID starting with 92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e not found: ID does not exist" containerID="92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e" Feb 26 14:59:32 crc kubenswrapper[4724]: I0226 14:59:32.606493 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e"} err="failed to get container status \"92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e\": rpc error: code = NotFound desc = could not find container \"92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e\": container with ID starting with 92954630d073aeeb8f06929ef187c2ef6a9772becbc6a4accc29bff9221cc31e not found: ID does not exist" Feb 26 14:59:33 crc kubenswrapper[4724]: I0226 14:59:33.991496 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" path="/var/lib/kubelet/pods/53e040ad-c269-4ea3-adb9-bd1ee947a829/volumes" Feb 26 14:59:45 crc kubenswrapper[4724]: I0226 14:59:44.975888 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 14:59:45 crc kubenswrapper[4724]: E0226 14:59:44.977021 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 14:59:57 crc kubenswrapper[4724]: I0226 14:59:57.976485 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 14:59:57 crc kubenswrapper[4724]: E0226 14:59:57.977715 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.198060 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535300-4spbs"] Feb 26 15:00:00 crc kubenswrapper[4724]: E0226 15:00:00.198981 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="extract-utilities" Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.198999 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="extract-utilities" Feb 26 15:00:00 crc kubenswrapper[4724]: E0226 15:00:00.199023 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="extract-content" Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.199031 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="extract-content" Feb 26 15:00:00 crc kubenswrapper[4724]: E0226 
15:00:00.199062 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="registry-server"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.199070 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="registry-server"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.199343 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e040ad-c269-4ea3-adb9-bd1ee947a829" containerName="registry-server"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.202107 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.211271 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"]
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.213983 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.216253 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.217905 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.222936 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.223473 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.223273 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.237136 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-4spbs"]
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.238448 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq5h\" (UniqueName: \"kubernetes.io/projected/f89669d5-8f04-41d9-9cf6-a490ed30d9ab-kube-api-access-2mq5h\") pod \"auto-csr-approver-29535300-4spbs\" (UID: \"f89669d5-8f04-41d9-9cf6-a490ed30d9ab\") " pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.272414 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"]
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.340680 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c91c9833-70b9-4d0f-85a0-97eaffe9390c-secret-volume\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.340778 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c91c9833-70b9-4d0f-85a0-97eaffe9390c-config-volume\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.341011 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mq5h\" (UniqueName: \"kubernetes.io/projected/f89669d5-8f04-41d9-9cf6-a490ed30d9ab-kube-api-access-2mq5h\") pod \"auto-csr-approver-29535300-4spbs\" (UID: \"f89669d5-8f04-41d9-9cf6-a490ed30d9ab\") " pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.341126 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4snqk\" (UniqueName: \"kubernetes.io/projected/c91c9833-70b9-4d0f-85a0-97eaffe9390c-kube-api-access-4snqk\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.366804 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mq5h\" (UniqueName: \"kubernetes.io/projected/f89669d5-8f04-41d9-9cf6-a490ed30d9ab-kube-api-access-2mq5h\") pod \"auto-csr-approver-29535300-4spbs\" (UID: \"f89669d5-8f04-41d9-9cf6-a490ed30d9ab\") " pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.443461 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4snqk\" (UniqueName: \"kubernetes.io/projected/c91c9833-70b9-4d0f-85a0-97eaffe9390c-kube-api-access-4snqk\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.443782 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c91c9833-70b9-4d0f-85a0-97eaffe9390c-secret-volume\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.443966 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c91c9833-70b9-4d0f-85a0-97eaffe9390c-config-volume\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.444856 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c91c9833-70b9-4d0f-85a0-97eaffe9390c-config-volume\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.447099 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c91c9833-70b9-4d0f-85a0-97eaffe9390c-secret-volume\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.465153 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4snqk\" (UniqueName: \"kubernetes.io/projected/c91c9833-70b9-4d0f-85a0-97eaffe9390c-kube-api-access-4snqk\") pod \"collect-profiles-29535300-rngp7\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.538059 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:00 crc kubenswrapper[4724]: I0226 15:00:00.574489 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:01 crc kubenswrapper[4724]: I0226 15:00:01.128065 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"]
Feb 26 15:00:01 crc kubenswrapper[4724]: I0226 15:00:01.189740 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-4spbs"]
Feb 26 15:00:01 crc kubenswrapper[4724]: W0226 15:00:01.193279 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf89669d5_8f04_41d9_9cf6_a490ed30d9ab.slice/crio-e82bb424491a6e07abf1914eaf31d9c5a04fe33b601da57f698620c33ac7515d WatchSource:0}: Error finding container e82bb424491a6e07abf1914eaf31d9c5a04fe33b601da57f698620c33ac7515d: Status 404 returned error can't find the container with id e82bb424491a6e07abf1914eaf31d9c5a04fe33b601da57f698620c33ac7515d
Feb 26 15:00:01 crc kubenswrapper[4724]: I0226 15:00:01.793416 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7" event={"ID":"c91c9833-70b9-4d0f-85a0-97eaffe9390c","Type":"ContainerStarted","Data":"f65950967e0392e816dde4a3fd103b497ba536d379c22ff5cab02ae5e8390538"}
Feb 26 15:00:01 crc kubenswrapper[4724]: I0226 15:00:01.793501 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7" event={"ID":"c91c9833-70b9-4d0f-85a0-97eaffe9390c","Type":"ContainerStarted","Data":"b8c72fa402021eb0cb0e32a25ddd78366afce9a57b6b270d95076fef67490054"}
Feb 26 15:00:01 crc kubenswrapper[4724]: I0226 15:00:01.799490 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-4spbs" event={"ID":"f89669d5-8f04-41d9-9cf6-a490ed30d9ab","Type":"ContainerStarted","Data":"e82bb424491a6e07abf1914eaf31d9c5a04fe33b601da57f698620c33ac7515d"}
Feb 26 15:00:01 crc kubenswrapper[4724]: I0226 15:00:01.831793 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7" podStartSLOduration=1.831765315 podStartE2EDuration="1.831765315s" podCreationTimestamp="2026-02-26 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:00:01.820278555 +0000 UTC m=+14068.476017680" watchObservedRunningTime="2026-02-26 15:00:01.831765315 +0000 UTC m=+14068.487504430"
Feb 26 15:00:02 crc kubenswrapper[4724]: I0226 15:00:02.813264 4724 generic.go:334] "Generic (PLEG): container finished" podID="c91c9833-70b9-4d0f-85a0-97eaffe9390c" containerID="f65950967e0392e816dde4a3fd103b497ba536d379c22ff5cab02ae5e8390538" exitCode=0
Feb 26 15:00:02 crc kubenswrapper[4724]: I0226 15:00:02.813395 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7" event={"ID":"c91c9833-70b9-4d0f-85a0-97eaffe9390c","Type":"ContainerDied","Data":"f65950967e0392e816dde4a3fd103b497ba536d379c22ff5cab02ae5e8390538"}
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.401488 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.558001 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c91c9833-70b9-4d0f-85a0-97eaffe9390c-config-volume\") pod \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") "
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.558278 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c91c9833-70b9-4d0f-85a0-97eaffe9390c-secret-volume\") pod \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") "
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.558354 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4snqk\" (UniqueName: \"kubernetes.io/projected/c91c9833-70b9-4d0f-85a0-97eaffe9390c-kube-api-access-4snqk\") pod \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\" (UID: \"c91c9833-70b9-4d0f-85a0-97eaffe9390c\") "
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.560648 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91c9833-70b9-4d0f-85a0-97eaffe9390c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c91c9833-70b9-4d0f-85a0-97eaffe9390c" (UID: "c91c9833-70b9-4d0f-85a0-97eaffe9390c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.564427 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91c9833-70b9-4d0f-85a0-97eaffe9390c-kube-api-access-4snqk" (OuterVolumeSpecName: "kube-api-access-4snqk") pod "c91c9833-70b9-4d0f-85a0-97eaffe9390c" (UID: "c91c9833-70b9-4d0f-85a0-97eaffe9390c"). InnerVolumeSpecName "kube-api-access-4snqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.572526 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91c9833-70b9-4d0f-85a0-97eaffe9390c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c91c9833-70b9-4d0f-85a0-97eaffe9390c" (UID: "c91c9833-70b9-4d0f-85a0-97eaffe9390c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.660440 4724 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c91c9833-70b9-4d0f-85a0-97eaffe9390c-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.660475 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4snqk\" (UniqueName: \"kubernetes.io/projected/c91c9833-70b9-4d0f-85a0-97eaffe9390c-kube-api-access-4snqk\") on node \"crc\" DevicePath \"\""
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.660484 4724 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c91c9833-70b9-4d0f-85a0-97eaffe9390c-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.837025 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7" event={"ID":"c91c9833-70b9-4d0f-85a0-97eaffe9390c","Type":"ContainerDied","Data":"b8c72fa402021eb0cb0e32a25ddd78366afce9a57b6b270d95076fef67490054"}
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.837070 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-rngp7"
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.837065 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c72fa402021eb0cb0e32a25ddd78366afce9a57b6b270d95076fef67490054"
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.925592 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br"]
Feb 26 15:00:04 crc kubenswrapper[4724]: I0226 15:00:04.934020 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-7p5br"]
Feb 26 15:00:05 crc kubenswrapper[4724]: I0226 15:00:05.991551 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d52259-8faf-4e53-9c4d-6210079417f4" path="/var/lib/kubelet/pods/e2d52259-8faf-4e53-9c4d-6210079417f4/volumes"
Feb 26 15:00:06 crc kubenswrapper[4724]: I0226 15:00:06.858801 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-4spbs" event={"ID":"f89669d5-8f04-41d9-9cf6-a490ed30d9ab","Type":"ContainerStarted","Data":"82093ce08f4487947505b3ff08128b3a1b537a6002888de364404e9767c6f960"}
Feb 26 15:00:06 crc kubenswrapper[4724]: I0226 15:00:06.878134 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535300-4spbs" podStartSLOduration=2.495781055 podStartE2EDuration="6.878119503s" podCreationTimestamp="2026-02-26 15:00:00 +0000 UTC" firstStartedPulling="2026-02-26 15:00:01.192042901 +0000 UTC m=+14067.847782016" lastFinishedPulling="2026-02-26 15:00:05.574381349 +0000 UTC m=+14072.230120464" observedRunningTime="2026-02-26 15:00:06.872568963 +0000 UTC m=+14073.528308098" watchObservedRunningTime="2026-02-26 15:00:06.878119503 +0000 UTC m=+14073.533858618"
Feb 26 15:00:07 crc kubenswrapper[4724]: I0226 15:00:07.868533 4724 generic.go:334] "Generic (PLEG): container finished" podID="f89669d5-8f04-41d9-9cf6-a490ed30d9ab" containerID="82093ce08f4487947505b3ff08128b3a1b537a6002888de364404e9767c6f960" exitCode=0
Feb 26 15:00:07 crc kubenswrapper[4724]: I0226 15:00:07.868635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-4spbs" event={"ID":"f89669d5-8f04-41d9-9cf6-a490ed30d9ab","Type":"ContainerDied","Data":"82093ce08f4487947505b3ff08128b3a1b537a6002888de364404e9767c6f960"}
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.333546 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.460559 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mq5h\" (UniqueName: \"kubernetes.io/projected/f89669d5-8f04-41d9-9cf6-a490ed30d9ab-kube-api-access-2mq5h\") pod \"f89669d5-8f04-41d9-9cf6-a490ed30d9ab\" (UID: \"f89669d5-8f04-41d9-9cf6-a490ed30d9ab\") "
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.465656 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89669d5-8f04-41d9-9cf6-a490ed30d9ab-kube-api-access-2mq5h" (OuterVolumeSpecName: "kube-api-access-2mq5h") pod "f89669d5-8f04-41d9-9cf6-a490ed30d9ab" (UID: "f89669d5-8f04-41d9-9cf6-a490ed30d9ab"). InnerVolumeSpecName "kube-api-access-2mq5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.562673 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mq5h\" (UniqueName: \"kubernetes.io/projected/f89669d5-8f04-41d9-9cf6-a490ed30d9ab-kube-api-access-2mq5h\") on node \"crc\" DevicePath \"\""
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.889660 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-4spbs" event={"ID":"f89669d5-8f04-41d9-9cf6-a490ed30d9ab","Type":"ContainerDied","Data":"e82bb424491a6e07abf1914eaf31d9c5a04fe33b601da57f698620c33ac7515d"}
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.889976 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82bb424491a6e07abf1914eaf31d9c5a04fe33b601da57f698620c33ac7515d"
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.890038 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-4spbs"
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.937730 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-p4txd"]
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.946308 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-p4txd"]
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.975545 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:00:09 crc kubenswrapper[4724]: E0226 15:00:09.975963 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:00:09 crc kubenswrapper[4724]: I0226 15:00:09.994972 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6168c23-1074-4235-8354-cbe5d261de46" path="/var/lib/kubelet/pods/f6168c23-1074-4235-8354-cbe5d261de46/volumes"
Feb 26 15:00:20 crc kubenswrapper[4724]: I0226 15:00:20.975845 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:00:20 crc kubenswrapper[4724]: E0226 15:00:20.977460 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:00:34 crc kubenswrapper[4724]: I0226 15:00:34.976260 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:00:34 crc kubenswrapper[4724]: E0226 15:00:34.977427 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:00:37 crc kubenswrapper[4724]: I0226 15:00:37.355299 4724 scope.go:117] "RemoveContainer" containerID="baeb3d919d4c39b74893e7b72daa14d84ef1c432d323fd8c07ed39527f5dfca6"
Feb 26 15:00:37 crc kubenswrapper[4724]: I0226 15:00:37.405071 4724 scope.go:117] "RemoveContainer" containerID="3d72d24330334e7fa275717b40307d1137a2187f31b543ea363a2ae6e7e1a74f"
Feb 26 15:00:47 crc kubenswrapper[4724]: I0226 15:00:47.975932 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:00:47 crc kubenswrapper[4724]: E0226 15:00:47.976810 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.185530 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535301-44wcs"]
Feb 26 15:01:00 crc kubenswrapper[4724]: E0226 15:01:00.186507 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91c9833-70b9-4d0f-85a0-97eaffe9390c" containerName="collect-profiles"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.186524 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91c9833-70b9-4d0f-85a0-97eaffe9390c" containerName="collect-profiles"
Feb 26 15:01:00 crc kubenswrapper[4724]: E0226 15:01:00.186547 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89669d5-8f04-41d9-9cf6-a490ed30d9ab" containerName="oc"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.186555 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89669d5-8f04-41d9-9cf6-a490ed30d9ab" containerName="oc"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.186786 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91c9833-70b9-4d0f-85a0-97eaffe9390c" containerName="collect-profiles"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.186811 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89669d5-8f04-41d9-9cf6-a490ed30d9ab" containerName="oc"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.187577 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.205933 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535301-44wcs"]
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.222836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-combined-ca-bundle\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.222889 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-config-data\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.222933 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-fernet-keys\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.222984 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgfs\" (UniqueName: \"kubernetes.io/projected/c640aad3-ad6f-456d-9901-0bb0a62b88e4-kube-api-access-fxgfs\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.325315 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-combined-ca-bundle\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.325647 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-config-data\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.325690 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-fernet-keys\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.325730 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgfs\" (UniqueName: \"kubernetes.io/projected/c640aad3-ad6f-456d-9901-0bb0a62b88e4-kube-api-access-fxgfs\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.341594 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-combined-ca-bundle\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.341615 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-config-data\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.343461 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-fernet-keys\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.343802 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgfs\" (UniqueName: \"kubernetes.io/projected/c640aad3-ad6f-456d-9901-0bb0a62b88e4-kube-api-access-fxgfs\") pod \"keystone-cron-29535301-44wcs\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") " pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.575682 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:00 crc kubenswrapper[4724]: I0226 15:01:00.976533 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:01:00 crc kubenswrapper[4724]: E0226 15:01:00.977890 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:01:01 crc kubenswrapper[4724]: I0226 15:01:01.173571 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535301-44wcs"]
Feb 26 15:01:01 crc kubenswrapper[4724]: I0226 15:01:01.482838 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-44wcs" event={"ID":"c640aad3-ad6f-456d-9901-0bb0a62b88e4","Type":"ContainerStarted","Data":"88eadc04a26bcb0bd6c25ac5197e61acb8d2b2b3ba6b7cb98c16c581ef6f1634"}
Feb 26 15:01:01 crc kubenswrapper[4724]: I0226 15:01:01.483239 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-44wcs" event={"ID":"c640aad3-ad6f-456d-9901-0bb0a62b88e4","Type":"ContainerStarted","Data":"b591c8600c9c10e0352c5e73d5554fc5ce27f2199ac4765e86fdd71cb072e036"}
Feb 26 15:01:01 crc kubenswrapper[4724]: I0226 15:01:01.499708 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535301-44wcs" podStartSLOduration=1.49968823 podStartE2EDuration="1.49968823s" podCreationTimestamp="2026-02-26 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:01:01.498938281 +0000 UTC m=+14128.154677396" watchObservedRunningTime="2026-02-26 15:01:01.49968823 +0000 UTC m=+14128.155427345"
Feb 26 15:01:05 crc kubenswrapper[4724]: I0226 15:01:05.537249 4724 generic.go:334] "Generic (PLEG): container finished" podID="c640aad3-ad6f-456d-9901-0bb0a62b88e4" containerID="88eadc04a26bcb0bd6c25ac5197e61acb8d2b2b3ba6b7cb98c16c581ef6f1634" exitCode=0
Feb 26 15:01:05 crc kubenswrapper[4724]: I0226 15:01:05.537301 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-44wcs" event={"ID":"c640aad3-ad6f-456d-9901-0bb0a62b88e4","Type":"ContainerDied","Data":"88eadc04a26bcb0bd6c25ac5197e61acb8d2b2b3ba6b7cb98c16c581ef6f1634"}
Feb 26 15:01:06 crc kubenswrapper[4724]: I0226 15:01:06.966792 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.081015 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-config-data\") pod \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") "
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.081555 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-combined-ca-bundle\") pod \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") "
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.081676 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgfs\" (UniqueName: \"kubernetes.io/projected/c640aad3-ad6f-456d-9901-0bb0a62b88e4-kube-api-access-fxgfs\") pod \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") "
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.081778 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-fernet-keys\") pod \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\" (UID: \"c640aad3-ad6f-456d-9901-0bb0a62b88e4\") "
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.088524 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c640aad3-ad6f-456d-9901-0bb0a62b88e4" (UID: "c640aad3-ad6f-456d-9901-0bb0a62b88e4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.089349 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c640aad3-ad6f-456d-9901-0bb0a62b88e4-kube-api-access-fxgfs" (OuterVolumeSpecName: "kube-api-access-fxgfs") pod "c640aad3-ad6f-456d-9901-0bb0a62b88e4" (UID: "c640aad3-ad6f-456d-9901-0bb0a62b88e4"). InnerVolumeSpecName "kube-api-access-fxgfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.129763 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c640aad3-ad6f-456d-9901-0bb0a62b88e4" (UID: "c640aad3-ad6f-456d-9901-0bb0a62b88e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.157692 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-config-data" (OuterVolumeSpecName: "config-data") pod "c640aad3-ad6f-456d-9901-0bb0a62b88e4" (UID: "c640aad3-ad6f-456d-9901-0bb0a62b88e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.183931 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgfs\" (UniqueName: \"kubernetes.io/projected/c640aad3-ad6f-456d-9901-0bb0a62b88e4-kube-api-access-fxgfs\") on node \"crc\" DevicePath \"\""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.183977 4724 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.183992 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.184005 4724 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c640aad3-ad6f-456d-9901-0bb0a62b88e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.559750 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-44wcs" event={"ID":"c640aad3-ad6f-456d-9901-0bb0a62b88e4","Type":"ContainerDied","Data":"b591c8600c9c10e0352c5e73d5554fc5ce27f2199ac4765e86fdd71cb072e036"}
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.559811 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b591c8600c9c10e0352c5e73d5554fc5ce27f2199ac4765e86fdd71cb072e036"
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.560340 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535301-44wcs"
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.650647 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cron-29535121-zgmbr"]
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.659878 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cron-29535121-zgmbr"]
Feb 26 15:01:07 crc kubenswrapper[4724]: I0226 15:01:07.990000 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ac65d3-f64d-4a73-b7b6-df090fc3706d" path="/var/lib/kubelet/pods/97ac65d3-f64d-4a73-b7b6-df090fc3706d/volumes"
Feb 26 15:01:12 crc kubenswrapper[4724]: I0226 15:01:12.975873 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:01:12 crc kubenswrapper[4724]: E0226 15:01:12.976895 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:01:25 crc kubenswrapper[4724]: I0226 15:01:25.975982 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:01:25 crc kubenswrapper[4724]: E0226 15:01:25.976940 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:01:37 crc kubenswrapper[4724]: I0226 15:01:37.649026 4724 scope.go:117] "RemoveContainer" containerID="4add9a9a1d6e89c7acc6fd83ce47979b7300ceea40426f7789850b09a02ad5ac"
Feb 26 15:01:39 crc kubenswrapper[4724]: I0226 15:01:39.976097 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:01:39 crc kubenswrapper[4724]: E0226 15:01:39.976753 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:01:51 crc kubenswrapper[4724]: I0226 15:01:51.975425 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:01:51 crc kubenswrapper[4724]: E0226 15:01:51.976261 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.191718 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535302-k7qtw"]
Feb 26 15:02:00 crc kubenswrapper[4724]: E0226 15:02:00.192662 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c640aad3-ad6f-456d-9901-0bb0a62b88e4" containerName="keystone-cron"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.192679 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c640aad3-ad6f-456d-9901-0bb0a62b88e4" containerName="keystone-cron"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.192916 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c640aad3-ad6f-456d-9901-0bb0a62b88e4" containerName="keystone-cron"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.193858 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.207446 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.207525 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.207529 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.219245 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-k7qtw"]
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.343376 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8w58\" (UniqueName: \"kubernetes.io/projected/46881f2d-840f-462a-ad89-af75f272e60c-kube-api-access-n8w58\") pod \"auto-csr-approver-29535302-k7qtw\" (UID: \"46881f2d-840f-462a-ad89-af75f272e60c\") " pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.445935 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8w58\" (UniqueName: \"kubernetes.io/projected/46881f2d-840f-462a-ad89-af75f272e60c-kube-api-access-n8w58\") pod \"auto-csr-approver-29535302-k7qtw\" (UID: \"46881f2d-840f-462a-ad89-af75f272e60c\") " pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.469170 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8w58\" (UniqueName: \"kubernetes.io/projected/46881f2d-840f-462a-ad89-af75f272e60c-kube-api-access-n8w58\") pod \"auto-csr-approver-29535302-k7qtw\" (UID: \"46881f2d-840f-462a-ad89-af75f272e60c\") " pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:00 crc kubenswrapper[4724]: I0226 15:02:00.545681 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:01 crc kubenswrapper[4724]: I0226 15:02:01.814078 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-k7qtw"]
Feb 26 15:02:02 crc kubenswrapper[4724]: I0226 15:02:02.215685 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-k7qtw" event={"ID":"46881f2d-840f-462a-ad89-af75f272e60c","Type":"ContainerStarted","Data":"6e3eb0df36eef359c5927a7db348b32926ec97d1eeda94409b2e18a2ac6be3ae"}
Feb 26 15:02:02 crc kubenswrapper[4724]: I0226 15:02:02.975714 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:02:02 crc kubenswrapper[4724]: E0226 15:02:02.976814 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:02:04 crc kubenswrapper[4724]: I0226 15:02:04.250377 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-k7qtw" event={"ID":"46881f2d-840f-462a-ad89-af75f272e60c","Type":"ContainerStarted","Data":"d496e389c292d1ab6abd8a9a8d439dc22cac1226f7428977b61156a56fdc0ae8"}
Feb 26 15:02:04 crc kubenswrapper[4724]: I0226 15:02:04.276439 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535302-k7qtw" podStartSLOduration=3.253793569 podStartE2EDuration="4.276417924s" podCreationTimestamp="2026-02-26 15:02:00 +0000 UTC" firstStartedPulling="2026-02-26 15:02:01.82822919 +0000 UTC m=+14188.483968305" lastFinishedPulling="2026-02-26 15:02:02.850853545 +0000 UTC m=+14189.506592660" observedRunningTime="2026-02-26 15:02:04.275262385 +0000 UTC m=+14190.931001530" watchObservedRunningTime="2026-02-26 15:02:04.276417924 +0000 UTC m=+14190.932157049"
Feb 26 15:02:05 crc kubenswrapper[4724]: I0226 15:02:05.280157 4724 generic.go:334] "Generic (PLEG): container finished" podID="46881f2d-840f-462a-ad89-af75f272e60c" containerID="d496e389c292d1ab6abd8a9a8d439dc22cac1226f7428977b61156a56fdc0ae8" exitCode=0
Feb 26 15:02:05 crc kubenswrapper[4724]: I0226 15:02:05.280226 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-k7qtw" event={"ID":"46881f2d-840f-462a-ad89-af75f272e60c","Type":"ContainerDied","Data":"d496e389c292d1ab6abd8a9a8d439dc22cac1226f7428977b61156a56fdc0ae8"}
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.764235 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pk8nj/must-gather-r9677"]
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.779150 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pk8nj/must-gather-r9677"]
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.780199 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.787436 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pk8nj"/"kube-root-ca.crt"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.787627 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pk8nj"/"openshift-service-ca.crt"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.849783 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.881442 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8w58\" (UniqueName: \"kubernetes.io/projected/46881f2d-840f-462a-ad89-af75f272e60c-kube-api-access-n8w58\") pod \"46881f2d-840f-462a-ad89-af75f272e60c\" (UID: \"46881f2d-840f-462a-ad89-af75f272e60c\") "
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.882064 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf705be9-0e89-49db-aa47-c709a3f7c82c-must-gather-output\") pod \"must-gather-r9677\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.882142 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm6fk\" (UniqueName: \"kubernetes.io/projected/cf705be9-0e89-49db-aa47-c709a3f7c82c-kube-api-access-mm6fk\") pod \"must-gather-r9677\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.900837 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46881f2d-840f-462a-ad89-af75f272e60c-kube-api-access-n8w58" (OuterVolumeSpecName: "kube-api-access-n8w58") pod "46881f2d-840f-462a-ad89-af75f272e60c" (UID: "46881f2d-840f-462a-ad89-af75f272e60c"). InnerVolumeSpecName "kube-api-access-n8w58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.983153 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf705be9-0e89-49db-aa47-c709a3f7c82c-must-gather-output\") pod \"must-gather-r9677\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.983494 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm6fk\" (UniqueName: \"kubernetes.io/projected/cf705be9-0e89-49db-aa47-c709a3f7c82c-kube-api-access-mm6fk\") pod \"must-gather-r9677\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.983594 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8w58\" (UniqueName: \"kubernetes.io/projected/46881f2d-840f-462a-ad89-af75f272e60c-kube-api-access-n8w58\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:06 crc kubenswrapper[4724]: I0226 15:02:06.984610 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf705be9-0e89-49db-aa47-c709a3f7c82c-must-gather-output\") pod \"must-gather-r9677\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.011911 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm6fk\" (UniqueName: \"kubernetes.io/projected/cf705be9-0e89-49db-aa47-c709a3f7c82c-kube-api-access-mm6fk\") pod \"must-gather-r9677\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.081259 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-5q9w2"]
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.089554 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-5q9w2"]
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.173656 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/must-gather-r9677"
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.315124 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-k7qtw" event={"ID":"46881f2d-840f-462a-ad89-af75f272e60c","Type":"ContainerDied","Data":"6e3eb0df36eef359c5927a7db348b32926ec97d1eeda94409b2e18a2ac6be3ae"}
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.315170 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3eb0df36eef359c5927a7db348b32926ec97d1eeda94409b2e18a2ac6be3ae"
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.315257 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-k7qtw"
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.676583 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pk8nj/must-gather-r9677"]
Feb 26 15:02:07 crc kubenswrapper[4724]: W0226 15:02:07.677834 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf705be9_0e89_49db_aa47_c709a3f7c82c.slice/crio-0861acbef1d9225aaefcae52324fd6b560afdd965430b8e98a5e0309338d1f33 WatchSource:0}: Error finding container 0861acbef1d9225aaefcae52324fd6b560afdd965430b8e98a5e0309338d1f33: Status 404 returned error can't find the container with id 0861acbef1d9225aaefcae52324fd6b560afdd965430b8e98a5e0309338d1f33
Feb 26 15:02:07 crc kubenswrapper[4724]: I0226 15:02:07.989099 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39428a48-848a-49d0-8ad5-48e204b161b4" path="/var/lib/kubelet/pods/39428a48-848a-49d0-8ad5-48e204b161b4/volumes"
Feb 26 15:02:08 crc kubenswrapper[4724]: I0226 15:02:08.325953 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/must-gather-r9677" event={"ID":"cf705be9-0e89-49db-aa47-c709a3f7c82c","Type":"ContainerStarted","Data":"0861acbef1d9225aaefcae52324fd6b560afdd965430b8e98a5e0309338d1f33"}
Feb 26 15:02:14 crc kubenswrapper[4724]: I0226 15:02:14.975474 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:02:14 crc kubenswrapper[4724]: E0226 15:02:14.976322 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:02:18 crc kubenswrapper[4724]: I0226 15:02:18.428272 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/must-gather-r9677" event={"ID":"cf705be9-0e89-49db-aa47-c709a3f7c82c","Type":"ContainerStarted","Data":"16b0304d71d80fb6806a6d1c03a18ee7193b299921ffb04aa7ada07e848268bf"}
Feb 26 15:02:18 crc kubenswrapper[4724]: I0226 15:02:18.428590 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/must-gather-r9677" event={"ID":"cf705be9-0e89-49db-aa47-c709a3f7c82c","Type":"ContainerStarted","Data":"4f4fd577dd0762cfc170fef528b65163b8f5bf6ec0e4412bb147252841411f0e"}
Feb 26 15:02:18 crc kubenswrapper[4724]: I0226 15:02:18.448268 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pk8nj/must-gather-r9677" podStartSLOduration=2.444623315 podStartE2EDuration="12.448249211s" podCreationTimestamp="2026-02-26 15:02:06 +0000 UTC" firstStartedPulling="2026-02-26 15:02:07.679510884 +0000 UTC m=+14194.335249999" lastFinishedPulling="2026-02-26 15:02:17.68313677 +0000 UTC m=+14204.338875895" observedRunningTime="2026-02-26 15:02:18.439931391 +0000 UTC m=+14205.095670496" watchObservedRunningTime="2026-02-26 15:02:18.448249211 +0000 UTC m=+14205.103988326"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.352992 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-qb77z"]
Feb 26 15:02:25 crc kubenswrapper[4724]: E0226 15:02:25.353952 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46881f2d-840f-462a-ad89-af75f272e60c" containerName="oc"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.353965 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="46881f2d-840f-462a-ad89-af75f272e60c" containerName="oc"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.354156 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="46881f2d-840f-462a-ad89-af75f272e60c" containerName="oc"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.355455 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.357556 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pk8nj"/"default-dockercfg-kcqnc"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.412791 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619d32ee-3dd1-4e24-9c99-0c1999cd458e-host\") pod \"crc-debug-qb77z\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.412898 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxb8\" (UniqueName: \"kubernetes.io/projected/619d32ee-3dd1-4e24-9c99-0c1999cd458e-kube-api-access-8pxb8\") pod \"crc-debug-qb77z\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.514798 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619d32ee-3dd1-4e24-9c99-0c1999cd458e-host\") pod \"crc-debug-qb77z\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.514872 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxb8\" (UniqueName: \"kubernetes.io/projected/619d32ee-3dd1-4e24-9c99-0c1999cd458e-kube-api-access-8pxb8\") pod \"crc-debug-qb77z\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.515389 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619d32ee-3dd1-4e24-9c99-0c1999cd458e-host\") pod \"crc-debug-qb77z\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.552118 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxb8\" (UniqueName: \"kubernetes.io/projected/619d32ee-3dd1-4e24-9c99-0c1999cd458e-kube-api-access-8pxb8\") pod \"crc-debug-qb77z\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:25 crc kubenswrapper[4724]: I0226 15:02:25.714327 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qb77z"
Feb 26 15:02:26 crc kubenswrapper[4724]: I0226 15:02:26.498764 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-qb77z" event={"ID":"619d32ee-3dd1-4e24-9c99-0c1999cd458e","Type":"ContainerStarted","Data":"fe2f3185acff6dd690ecf193bdd03c62bee5334673d1976df7b22e941002ac1d"}
Feb 26 15:02:27 crc kubenswrapper[4724]: I0226 15:02:27.975653 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:02:27 crc kubenswrapper[4724]: E0226 15:02:27.976316 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:02:37 crc kubenswrapper[4724]: I0226 15:02:37.606669 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"b9b5bd47-dc7c-492d-8c33-cd7d528555f6","Type":"ContainerDied","Data":"f1c7e303bd5f0b056c401fb99d967d6d5f95e751fe507c1a21461c7394030e47"}
Feb 26 15:02:37 crc kubenswrapper[4724]: I0226 15:02:37.606543 4724 generic.go:334] "Generic (PLEG): container finished" podID="b9b5bd47-dc7c-492d-8c33-cd7d528555f6" containerID="f1c7e303bd5f0b056c401fb99d967d6d5f95e751fe507c1a21461c7394030e47" exitCode=0
Feb 26 15:02:37 crc kubenswrapper[4724]: I0226 15:02:37.761882 4724 scope.go:117] "RemoveContainer" containerID="9928adb1108787e1ed2032e047b984e7407d4b42530b16ed3f7f12ba13abf87e"
Feb 26 15:02:38 crc kubenswrapper[4724]: I0226 15:02:38.975886 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2"
Feb 26 15:02:38 crc kubenswrapper[4724]: E0226 15:02:38.976589 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"
Feb 26 15:02:39 crc kubenswrapper[4724]: I0226 15:02:39.625086 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-qb77z" event={"ID":"619d32ee-3dd1-4e24-9c99-0c1999cd458e","Type":"ContainerStarted","Data":"d31c1ac58b7c34251ec3cc1d3b1d3b5ab1e2a5f368f11108845abaa1741bcf7f"}
Feb 26 15:02:39 crc kubenswrapper[4724]: I0226 15:02:39.655709 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pk8nj/crc-debug-qb77z" podStartSLOduration=1.594700516 podStartE2EDuration="14.655688771s" podCreationTimestamp="2026-02-26 15:02:25 +0000 UTC" firstStartedPulling="2026-02-26 15:02:25.793433572 +0000 UTC m=+14212.449172687" lastFinishedPulling="2026-02-26 15:02:38.854421827 +0000 UTC m=+14225.510160942" observedRunningTime="2026-02-26 15:02:39.645923914 +0000 UTC m=+14226.301663029" watchObservedRunningTime="2026-02-26 15:02:39.655688771 +0000 UTC m=+14226.311427886"
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.238326 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing"
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358131 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358295 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-temporary\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358334 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config-secret\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358381 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf5sx\" (UniqueName: \"kubernetes.io/projected/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-kube-api-access-lf5sx\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358421 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ca-certs\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358446 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ssh-key\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358564 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-config-data\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358610 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-workdir\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.358648 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config\") pod \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\" (UID: \"b9b5bd47-dc7c-492d-8c33-cd7d528555f6\") "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.360400 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-config-data" (OuterVolumeSpecName: "config-data") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.360836 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.363792 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.381803 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-kube-api-access-lf5sx" (OuterVolumeSpecName: "kube-api-access-lf5sx") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "kube-api-access-lf5sx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.388855 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.403527 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.440698 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.447790 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.455124 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b9b5bd47-dc7c-492d-8c33-cd7d528555f6" (UID: "b9b5bd47-dc7c-492d-8c33-cd7d528555f6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.461475 4724 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.461500 4724 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.461512 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.466589 4724 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.466612 4724 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.466623 4724 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.466634 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf5sx\" (UniqueName: \"kubernetes.io/projected/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-kube-api-access-lf5sx\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.466642 4724 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.466652 4724 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9b5bd47-dc7c-492d-8c33-cd7d528555f6-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.487025 4724 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.569397 4724 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.637957 4724 util.go:48] "No ready sandbox
for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.638111 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"b9b5bd47-dc7c-492d-8c33-cd7d528555f6","Type":"ContainerDied","Data":"2b30a67cffca72bc4169a209b1ff757b3610dc3ea448d701deba4d6e33fa00a8"} Feb 26 15:02:40 crc kubenswrapper[4724]: I0226 15:02:40.638143 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b30a67cffca72bc4169a209b1ff757b3610dc3ea448d701deba4d6e33fa00a8" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.851586 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 15:02:46 crc kubenswrapper[4724]: E0226 15:02:46.853356 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b5bd47-dc7c-492d-8c33-cd7d528555f6" containerName="tempest-tests-tempest-tests-runner" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.853378 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b5bd47-dc7c-492d-8c33-cd7d528555f6" containerName="tempest-tests-tempest-tests-runner" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.853612 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b5bd47-dc7c-492d-8c33-cd7d528555f6" containerName="tempest-tests-tempest-tests-runner" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.854675 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.857680 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-khdhf" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.870366 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.981979 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:46 crc kubenswrapper[4724]: I0226 15:02:46.982128 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh2k\" (UniqueName: \"kubernetes.io/projected/78368336-7209-421d-b638-e47679769c6d-kube-api-access-xlh2k\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:47 crc kubenswrapper[4724]: I0226 15:02:47.083972 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:47 crc kubenswrapper[4724]: I0226 15:02:47.084103 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlh2k\" 
(UniqueName: \"kubernetes.io/projected/78368336-7209-421d-b638-e47679769c6d-kube-api-access-xlh2k\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:47 crc kubenswrapper[4724]: I0226 15:02:47.087497 4724 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:47 crc kubenswrapper[4724]: I0226 15:02:47.155262 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlh2k\" (UniqueName: \"kubernetes.io/projected/78368336-7209-421d-b638-e47679769c6d-kube-api-access-xlh2k\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:47 crc kubenswrapper[4724]: I0226 15:02:47.178206 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78368336-7209-421d-b638-e47679769c6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:47 crc kubenswrapper[4724]: I0226 15:02:47.483372 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:02:48 crc kubenswrapper[4724]: I0226 15:02:48.320649 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 15:02:48 crc kubenswrapper[4724]: I0226 15:02:48.344447 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:02:48 crc kubenswrapper[4724]: I0226 15:02:48.923327 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"78368336-7209-421d-b638-e47679769c6d","Type":"ContainerStarted","Data":"9189c00a67bee4be995eb3a159c95efcdc59fc7021a147dbc77dc20ab4f76935"} Feb 26 15:02:49 crc kubenswrapper[4724]: I0226 15:02:49.932460 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"78368336-7209-421d-b638-e47679769c6d","Type":"ContainerStarted","Data":"077d973c17529441a4b7907609ee1615acf137610477f97563273b0d81893f60"} Feb 26 15:02:49 crc kubenswrapper[4724]: I0226 15:02:49.952513 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.789139481 podStartE2EDuration="3.952395538s" podCreationTimestamp="2026-02-26 15:02:46 +0000 UTC" firstStartedPulling="2026-02-26 15:02:48.342282247 +0000 UTC m=+14234.998021362" lastFinishedPulling="2026-02-26 15:02:49.505538304 +0000 UTC m=+14236.161277419" observedRunningTime="2026-02-26 15:02:49.943609776 +0000 UTC m=+14236.599348891" watchObservedRunningTime="2026-02-26 15:02:49.952395538 +0000 UTC m=+14236.608134663" Feb 26 15:02:50 crc kubenswrapper[4724]: I0226 15:02:50.975677 4724 scope.go:117] 
"RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:02:50 crc kubenswrapper[4724]: E0226 15:02:50.976233 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:03:04 crc kubenswrapper[4724]: I0226 15:03:04.976120 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:03:04 crc kubenswrapper[4724]: E0226 15:03:04.977254 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.699600 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gpvcm/must-gather-btd5b"] Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.703558 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.708345 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gpvcm"/"kube-root-ca.crt" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.709591 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gpvcm"/"openshift-service-ca.crt" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.816765 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gpvcm/must-gather-btd5b"] Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.846282 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plkdf\" (UniqueName: \"kubernetes.io/projected/c834cdec-42c9-43cf-93ed-975a34f0a532-kube-api-access-plkdf\") pod \"must-gather-btd5b\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.846739 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c834cdec-42c9-43cf-93ed-975a34f0a532-must-gather-output\") pod \"must-gather-btd5b\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.948892 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c834cdec-42c9-43cf-93ed-975a34f0a532-must-gather-output\") pod \"must-gather-btd5b\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.948970 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-plkdf\" (UniqueName: \"kubernetes.io/projected/c834cdec-42c9-43cf-93ed-975a34f0a532-kube-api-access-plkdf\") pod \"must-gather-btd5b\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.949479 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c834cdec-42c9-43cf-93ed-975a34f0a532-must-gather-output\") pod \"must-gather-btd5b\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:11 crc kubenswrapper[4724]: I0226 15:03:11.967472 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plkdf\" (UniqueName: \"kubernetes.io/projected/c834cdec-42c9-43cf-93ed-975a34f0a532-kube-api-access-plkdf\") pod \"must-gather-btd5b\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:12 crc kubenswrapper[4724]: I0226 15:03:12.046249 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:03:12 crc kubenswrapper[4724]: I0226 15:03:12.528389 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gpvcm/must-gather-btd5b"] Feb 26 15:03:13 crc kubenswrapper[4724]: I0226 15:03:13.181919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/must-gather-btd5b" event={"ID":"c834cdec-42c9-43cf-93ed-975a34f0a532","Type":"ContainerStarted","Data":"a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf"} Feb 26 15:03:13 crc kubenswrapper[4724]: I0226 15:03:13.183167 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/must-gather-btd5b" event={"ID":"c834cdec-42c9-43cf-93ed-975a34f0a532","Type":"ContainerStarted","Data":"23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c"} Feb 26 15:03:13 crc kubenswrapper[4724]: I0226 15:03:13.183255 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/must-gather-btd5b" event={"ID":"c834cdec-42c9-43cf-93ed-975a34f0a532","Type":"ContainerStarted","Data":"5915f7f9d19070ada790f21626c3c835593cbaab00f66466a430c9be3c64e5d0"} Feb 26 15:03:13 crc kubenswrapper[4724]: I0226 15:03:13.210838 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gpvcm/must-gather-btd5b" podStartSLOduration=2.210815742 podStartE2EDuration="2.210815742s" podCreationTimestamp="2026-02-26 15:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:03:13.199065455 +0000 UTC m=+14259.854804570" watchObservedRunningTime="2026-02-26 15:03:13.210815742 +0000 UTC m=+14259.866554857" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.737812 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gpvcm/crc-debug-5nv8j"] Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.739562 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.741503 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gpvcm"/"default-dockercfg-tp9g5" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.896094 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtbs\" (UniqueName: \"kubernetes.io/projected/fca8bdcb-261d-489b-897c-35d816391828-kube-api-access-fmtbs\") pod \"crc-debug-5nv8j\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.896424 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fca8bdcb-261d-489b-897c-35d816391828-host\") pod \"crc-debug-5nv8j\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.976101 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:03:19 crc kubenswrapper[4724]: E0226 15:03:19.976387 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.998109 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtbs\" (UniqueName: \"kubernetes.io/projected/fca8bdcb-261d-489b-897c-35d816391828-kube-api-access-fmtbs\") pod \"crc-debug-5nv8j\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.998332 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fca8bdcb-261d-489b-897c-35d816391828-host\") pod \"crc-debug-5nv8j\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:19 crc kubenswrapper[4724]: I0226 15:03:19.998495 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fca8bdcb-261d-489b-897c-35d816391828-host\") pod \"crc-debug-5nv8j\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:20 crc kubenswrapper[4724]: I0226 15:03:20.016473 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtbs\" (UniqueName: \"kubernetes.io/projected/fca8bdcb-261d-489b-897c-35d816391828-kube-api-access-fmtbs\") pod \"crc-debug-5nv8j\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:20 crc kubenswrapper[4724]: I0226 15:03:20.058993 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:03:20 crc kubenswrapper[4724]: I0226 15:03:20.267422 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" event={"ID":"fca8bdcb-261d-489b-897c-35d816391828","Type":"ContainerStarted","Data":"99a58de10ae65ded25b58d10048bca4bf3e6e016ab69330a4eb3f596dfc83ba1"} Feb 26 15:03:21 crc kubenswrapper[4724]: I0226 15:03:21.278779 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" event={"ID":"fca8bdcb-261d-489b-897c-35d816391828","Type":"ContainerStarted","Data":"f7239ecfd13429cc4b117011d7457d30bf0871f2ad76c01507123249151f04e2"} Feb 26 15:03:21 crc kubenswrapper[4724]: I0226 15:03:21.295235 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" podStartSLOduration=2.29521422 podStartE2EDuration="2.29521422s" podCreationTimestamp="2026-02-26 15:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:03:21.2912346 +0000 UTC m=+14267.946973725" watchObservedRunningTime="2026-02-26 15:03:21.29521422 +0000 UTC m=+14267.950953345" Feb 26 15:03:30 crc kubenswrapper[4724]: I0226 15:03:30.975894 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:03:30 crc kubenswrapper[4724]: E0226 15:03:30.976661 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:03:38 crc kubenswrapper[4724]: I0226 15:03:38.873446 4724 scope.go:117] "RemoveContainer" containerID="05475347bf257331de34983daab01a61b8d27a715517fc995fe57fa536462a9c" Feb 26 15:03:38 crc kubenswrapper[4724]: I0226 15:03:38.901972 4724 scope.go:117] "RemoveContainer" containerID="47c1a285444c26d91b1b31208ff30fafee4b7bc996657b02956682f1f3cbe6fc" Feb 26 15:03:38 crc kubenswrapper[4724]: I0226 15:03:38.951399 4724 scope.go:117] "RemoveContainer" containerID="7beb13c472ed3f82109fb98de2f08de967b011db55988244f1080fd5d20d51df" Feb 26 15:03:44 crc kubenswrapper[4724]: I0226 15:03:44.975042 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:03:44 crc kubenswrapper[4724]: E0226 15:03:44.975786 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:03:47 crc kubenswrapper[4724]: I0226 15:03:47.489426 4724 generic.go:334] "Generic (PLEG): container finished" podID="619d32ee-3dd1-4e24-9c99-0c1999cd458e" containerID="d31c1ac58b7c34251ec3cc1d3b1d3b5ab1e2a5f368f11108845abaa1741bcf7f" exitCode=0 Feb 26 15:03:47 crc kubenswrapper[4724]: I0226 15:03:47.489767 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-pk8nj/crc-debug-qb77z" event={"ID":"619d32ee-3dd1-4e24-9c99-0c1999cd458e","Type":"ContainerDied","Data":"d31c1ac58b7c34251ec3cc1d3b1d3b5ab1e2a5f368f11108845abaa1741bcf7f"} Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.595166 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qb77z" Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.634852 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-qb77z"] Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.643480 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-qb77z"] Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.691364 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619d32ee-3dd1-4e24-9c99-0c1999cd458e-host\") pod \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.691510 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pxb8\" (UniqueName: \"kubernetes.io/projected/619d32ee-3dd1-4e24-9c99-0c1999cd458e-kube-api-access-8pxb8\") pod \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\" (UID: \"619d32ee-3dd1-4e24-9c99-0c1999cd458e\") " Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.691548 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619d32ee-3dd1-4e24-9c99-0c1999cd458e-host" (OuterVolumeSpecName: "host") pod "619d32ee-3dd1-4e24-9c99-0c1999cd458e" (UID: "619d32ee-3dd1-4e24-9c99-0c1999cd458e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.692077 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619d32ee-3dd1-4e24-9c99-0c1999cd458e-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.699399 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d32ee-3dd1-4e24-9c99-0c1999cd458e-kube-api-access-8pxb8" (OuterVolumeSpecName: "kube-api-access-8pxb8") pod "619d32ee-3dd1-4e24-9c99-0c1999cd458e" (UID: "619d32ee-3dd1-4e24-9c99-0c1999cd458e"). InnerVolumeSpecName "kube-api-access-8pxb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:03:48 crc kubenswrapper[4724]: I0226 15:03:48.793513 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pxb8\" (UniqueName: \"kubernetes.io/projected/619d32ee-3dd1-4e24-9c99-0c1999cd458e-kube-api-access-8pxb8\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.504415 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2f3185acff6dd690ecf193bdd03c62bee5334673d1976df7b22e941002ac1d" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.504457 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qb77z" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.839830 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-qhdvj"] Feb 26 15:03:49 crc kubenswrapper[4724]: E0226 15:03:49.840308 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d32ee-3dd1-4e24-9c99-0c1999cd458e" containerName="container-00" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.840321 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d32ee-3dd1-4e24-9c99-0c1999cd458e" containerName="container-00" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.840553 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d32ee-3dd1-4e24-9c99-0c1999cd458e" containerName="container-00" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.841146 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.843026 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pk8nj"/"default-dockercfg-kcqnc" Feb 26 15:03:49 crc kubenswrapper[4724]: I0226 15:03:49.987142 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619d32ee-3dd1-4e24-9c99-0c1999cd458e" path="/var/lib/kubelet/pods/619d32ee-3dd1-4e24-9c99-0c1999cd458e/volumes" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.015880 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9gkm\" (UniqueName: \"kubernetes.io/projected/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-kube-api-access-v9gkm\") pod \"crc-debug-qhdvj\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.015932 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-host\") pod \"crc-debug-qhdvj\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.118585 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9gkm\" (UniqueName: \"kubernetes.io/projected/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-kube-api-access-v9gkm\") pod \"crc-debug-qhdvj\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.118632 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-host\") pod \"crc-debug-qhdvj\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.118866 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-host\") pod \"crc-debug-qhdvj\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.134776 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9gkm\" (UniqueName: 
\"kubernetes.io/projected/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-kube-api-access-v9gkm\") pod \"crc-debug-qhdvj\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.154999 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:50 crc kubenswrapper[4724]: W0226 15:03:50.184609 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84df0ecf_3763_4482_a4c3_097bd2c7f0bc.slice/crio-d74efcc3351a15bbd142e508933b6276d3dcee3b498c84cd0547bc6f5c63a7d0 WatchSource:0}: Error finding container d74efcc3351a15bbd142e508933b6276d3dcee3b498c84cd0547bc6f5c63a7d0: Status 404 returned error can't find the container with id d74efcc3351a15bbd142e508933b6276d3dcee3b498c84cd0547bc6f5c63a7d0 Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.517064 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" event={"ID":"84df0ecf-3763-4482-a4c3-097bd2c7f0bc","Type":"ContainerStarted","Data":"a6a53b9d68b730a83db611f3012963d89e9c4750c79b3a183c0cd6369c9a52aa"} Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.517385 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" event={"ID":"84df0ecf-3763-4482-a4c3-097bd2c7f0bc","Type":"ContainerStarted","Data":"d74efcc3351a15bbd142e508933b6276d3dcee3b498c84cd0547bc6f5c63a7d0"} Feb 26 15:03:50 crc kubenswrapper[4724]: I0226 15:03:50.535313 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" podStartSLOduration=1.5352960119999999 podStartE2EDuration="1.535296012s" podCreationTimestamp="2026-02-26 15:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:03:50.527728121 +0000 UTC m=+14297.183467236" watchObservedRunningTime="2026-02-26 15:03:50.535296012 +0000 UTC m=+14297.191035127" Feb 26 15:03:51 crc kubenswrapper[4724]: I0226 15:03:51.534908 4724 generic.go:334] "Generic (PLEG): container finished" podID="84df0ecf-3763-4482-a4c3-097bd2c7f0bc" containerID="a6a53b9d68b730a83db611f3012963d89e9c4750c79b3a183c0cd6369c9a52aa" exitCode=0 Feb 26 15:03:51 crc kubenswrapper[4724]: I0226 15:03:51.535162 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" event={"ID":"84df0ecf-3763-4482-a4c3-097bd2c7f0bc","Type":"ContainerDied","Data":"a6a53b9d68b730a83db611f3012963d89e9c4750c79b3a183c0cd6369c9a52aa"} Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.643826 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.673348 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-qhdvj"] Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.681566 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-qhdvj"] Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.785247 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9gkm\" (UniqueName: \"kubernetes.io/projected/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-kube-api-access-v9gkm\") pod \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.785371 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-host\") pod \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\" (UID: \"84df0ecf-3763-4482-a4c3-097bd2c7f0bc\") " Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.785474 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-host" (OuterVolumeSpecName: "host") pod "84df0ecf-3763-4482-a4c3-097bd2c7f0bc" (UID: "84df0ecf-3763-4482-a4c3-097bd2c7f0bc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.785840 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.801863 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-kube-api-access-v9gkm" (OuterVolumeSpecName: "kube-api-access-v9gkm") pod "84df0ecf-3763-4482-a4c3-097bd2c7f0bc" (UID: "84df0ecf-3763-4482-a4c3-097bd2c7f0bc"). InnerVolumeSpecName "kube-api-access-v9gkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:03:52 crc kubenswrapper[4724]: I0226 15:03:52.887880 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9gkm\" (UniqueName: \"kubernetes.io/projected/84df0ecf-3763-4482-a4c3-097bd2c7f0bc-kube-api-access-v9gkm\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.552496 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d74efcc3351a15bbd142e508933b6276d3dcee3b498c84cd0547bc6f5c63a7d0" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.553868 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-qhdvj" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.917139 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-6jmfr"] Feb 26 15:03:53 crc kubenswrapper[4724]: E0226 15:03:53.917719 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84df0ecf-3763-4482-a4c3-097bd2c7f0bc" containerName="container-00" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.917737 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="84df0ecf-3763-4482-a4c3-097bd2c7f0bc" containerName="container-00" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.917992 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="84df0ecf-3763-4482-a4c3-097bd2c7f0bc" containerName="container-00" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.918831 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.922753 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pk8nj"/"default-dockercfg-kcqnc" Feb 26 15:03:53 crc kubenswrapper[4724]: I0226 15:03:53.985196 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84df0ecf-3763-4482-a4c3-097bd2c7f0bc" path="/var/lib/kubelet/pods/84df0ecf-3763-4482-a4c3-097bd2c7f0bc/volumes" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.010716 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-host\") pod \"crc-debug-6jmfr\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.010793 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl6ts\" (UniqueName: \"kubernetes.io/projected/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-kube-api-access-nl6ts\") pod \"crc-debug-6jmfr\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.112518 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl6ts\" (UniqueName: \"kubernetes.io/projected/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-kube-api-access-nl6ts\") pod \"crc-debug-6jmfr\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.112736 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-host\") pod \"crc-debug-6jmfr\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.114229 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-host\") pod \"crc-debug-6jmfr\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.153940 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl6ts\" (UniqueName: 
\"kubernetes.io/projected/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-kube-api-access-nl6ts\") pod \"crc-debug-6jmfr\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.237089 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:54 crc kubenswrapper[4724]: W0226 15:03:54.270614 4724 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51e2ce71_49b9_4513_85c8_ef011c3cb7fd.slice/crio-e12f445038bb5b5dab9223855471f1c41cdedc67ef6b2da63a9cd7e88701ebfd WatchSource:0}: Error finding container e12f445038bb5b5dab9223855471f1c41cdedc67ef6b2da63a9cd7e88701ebfd: Status 404 returned error can't find the container with id e12f445038bb5b5dab9223855471f1c41cdedc67ef6b2da63a9cd7e88701ebfd Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.562387 4724 generic.go:334] "Generic (PLEG): container finished" podID="51e2ce71-49b9-4513-85c8-ef011c3cb7fd" containerID="99fc1d8fcf7ff95bc57720a1078342620c5137bdad3949ccbc042db3e436337e" exitCode=0 Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.562431 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" event={"ID":"51e2ce71-49b9-4513-85c8-ef011c3cb7fd","Type":"ContainerDied","Data":"99fc1d8fcf7ff95bc57720a1078342620c5137bdad3949ccbc042db3e436337e"} Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.562462 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" event={"ID":"51e2ce71-49b9-4513-85c8-ef011c3cb7fd","Type":"ContainerStarted","Data":"e12f445038bb5b5dab9223855471f1c41cdedc67ef6b2da63a9cd7e88701ebfd"} Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.603539 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-6jmfr"] Feb 26 15:03:54 crc kubenswrapper[4724]: I0226 15:03:54.614742 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pk8nj/crc-debug-6jmfr"] Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.666907 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.843100 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-host\") pod \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.843175 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl6ts\" (UniqueName: \"kubernetes.io/projected/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-kube-api-access-nl6ts\") pod \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\" (UID: \"51e2ce71-49b9-4513-85c8-ef011c3cb7fd\") " Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.843237 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-host" (OuterVolumeSpecName: "host") pod "51e2ce71-49b9-4513-85c8-ef011c3cb7fd" (UID: "51e2ce71-49b9-4513-85c8-ef011c3cb7fd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.843701 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.869467 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-kube-api-access-nl6ts" (OuterVolumeSpecName: "kube-api-access-nl6ts") pod "51e2ce71-49b9-4513-85c8-ef011c3cb7fd" (UID: "51e2ce71-49b9-4513-85c8-ef011c3cb7fd"). InnerVolumeSpecName "kube-api-access-nl6ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.945257 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl6ts\" (UniqueName: \"kubernetes.io/projected/51e2ce71-49b9-4513-85c8-ef011c3cb7fd-kube-api-access-nl6ts\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:55 crc kubenswrapper[4724]: I0226 15:03:55.984881 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e2ce71-49b9-4513-85c8-ef011c3cb7fd" path="/var/lib/kubelet/pods/51e2ce71-49b9-4513-85c8-ef011c3cb7fd/volumes" Feb 26 15:03:56 crc kubenswrapper[4724]: I0226 15:03:56.578114 4724 scope.go:117] "RemoveContainer" containerID="99fc1d8fcf7ff95bc57720a1078342620c5137bdad3949ccbc042db3e436337e" Feb 26 15:03:56 crc kubenswrapper[4724]: I0226 15:03:56.578201 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/crc-debug-6jmfr" Feb 26 15:03:56 crc kubenswrapper[4724]: I0226 15:03:56.975496 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:03:56 crc kubenswrapper[4724]: E0226 15:03:56.975903 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.183052 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535304-cxdzm"] Feb 26 15:04:00 crc kubenswrapper[4724]: E0226 15:04:00.183739 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e2ce71-49b9-4513-85c8-ef011c3cb7fd" containerName="container-00" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.183751 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e2ce71-49b9-4513-85c8-ef011c3cb7fd" containerName="container-00" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.183926 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e2ce71-49b9-4513-85c8-ef011c3cb7fd" containerName="container-00" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.184544 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.199518 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.199584 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.199947 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.216305 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-cxdzm"] Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.325632 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgj8\" (UniqueName: \"kubernetes.io/projected/f4947b58-8051-4d46-8de7-05973c9428ea-kube-api-access-cvgj8\") pod \"auto-csr-approver-29535304-cxdzm\" (UID: \"f4947b58-8051-4d46-8de7-05973c9428ea\") " pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.427279 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgj8\" (UniqueName: \"kubernetes.io/projected/f4947b58-8051-4d46-8de7-05973c9428ea-kube-api-access-cvgj8\") pod \"auto-csr-approver-29535304-cxdzm\" (UID: \"f4947b58-8051-4d46-8de7-05973c9428ea\") " pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.445269 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgj8\" (UniqueName: \"kubernetes.io/projected/f4947b58-8051-4d46-8de7-05973c9428ea-kube-api-access-cvgj8\") pod \"auto-csr-approver-29535304-cxdzm\" (UID: \"f4947b58-8051-4d46-8de7-05973c9428ea\") " pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:00 crc kubenswrapper[4724]: I0226 15:04:00.525001 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:01 crc kubenswrapper[4724]: I0226 15:04:01.088222 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-cxdzm"] Feb 26 15:04:01 crc kubenswrapper[4724]: I0226 15:04:01.639423 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" event={"ID":"f4947b58-8051-4d46-8de7-05973c9428ea","Type":"ContainerStarted","Data":"8cf014c99e48b11ada8902be9bdba89a6b93b4bd41f15b67fbd1979468bfc979"} Feb 26 15:04:03 crc kubenswrapper[4724]: I0226 15:04:03.663245 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" event={"ID":"f4947b58-8051-4d46-8de7-05973c9428ea","Type":"ContainerStarted","Data":"c7eacefbf2101237f34adf361505d80be3bbfedd62ec814335798547922c5cc3"} Feb 26 15:04:03 crc kubenswrapper[4724]: I0226 15:04:03.685633 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" podStartSLOduration=2.744070925 podStartE2EDuration="3.685617362s" podCreationTimestamp="2026-02-26 15:04:00 +0000 UTC" firstStartedPulling="2026-02-26 15:04:01.084943426 +0000 UTC m=+14307.740682541" lastFinishedPulling="2026-02-26 15:04:02.026489863 +0000 UTC m=+14308.682228978" observedRunningTime="2026-02-26 15:04:03.679717423 +0000 UTC m=+14310.335456558" watchObservedRunningTime="2026-02-26 15:04:03.685617362 +0000 UTC m=+14310.341356477" Feb 26 15:04:05 crc kubenswrapper[4724]: E0226 15:04:05.981267 4724 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4947b58_8051_4d46_8de7_05973c9428ea.slice/crio-c7eacefbf2101237f34adf361505d80be3bbfedd62ec814335798547922c5cc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4947b58_8051_4d46_8de7_05973c9428ea.slice/crio-conmon-c7eacefbf2101237f34adf361505d80be3bbfedd62ec814335798547922c5cc3.scope\": RecentStats: unable to find data in memory cache]" Feb 26 15:04:06 crc kubenswrapper[4724]: I0226 15:04:06.686967 4724 generic.go:334] "Generic (PLEG): container finished" podID="f4947b58-8051-4d46-8de7-05973c9428ea" containerID="c7eacefbf2101237f34adf361505d80be3bbfedd62ec814335798547922c5cc3" exitCode=0 Feb 26 15:04:06 crc kubenswrapper[4724]: I0226 15:04:06.687020 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" event={"ID":"f4947b58-8051-4d46-8de7-05973c9428ea","Type":"ContainerDied","Data":"c7eacefbf2101237f34adf361505d80be3bbfedd62ec814335798547922c5cc3"} Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.045725 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.076603 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvgj8\" (UniqueName: \"kubernetes.io/projected/f4947b58-8051-4d46-8de7-05973c9428ea-kube-api-access-cvgj8\") pod \"f4947b58-8051-4d46-8de7-05973c9428ea\" (UID: \"f4947b58-8051-4d46-8de7-05973c9428ea\") " Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.083566 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4947b58-8051-4d46-8de7-05973c9428ea-kube-api-access-cvgj8" (OuterVolumeSpecName: "kube-api-access-cvgj8") pod "f4947b58-8051-4d46-8de7-05973c9428ea" (UID: "f4947b58-8051-4d46-8de7-05973c9428ea"). InnerVolumeSpecName "kube-api-access-cvgj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.178934 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvgj8\" (UniqueName: \"kubernetes.io/projected/f4947b58-8051-4d46-8de7-05973c9428ea-kube-api-access-cvgj8\") on node \"crc\" DevicePath \"\"" Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.701794 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" event={"ID":"f4947b58-8051-4d46-8de7-05973c9428ea","Type":"ContainerDied","Data":"8cf014c99e48b11ada8902be9bdba89a6b93b4bd41f15b67fbd1979468bfc979"} Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.701831 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf014c99e48b11ada8902be9bdba89a6b93b4bd41f15b67fbd1979468bfc979" Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.701860 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-cxdzm" Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.765087 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-l9sj7"] Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.774366 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-l9sj7"] Feb 26 15:04:08 crc kubenswrapper[4724]: I0226 15:04:08.975747 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:04:08 crc kubenswrapper[4724]: E0226 15:04:08.976052 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:04:09 crc kubenswrapper[4724]: I0226 15:04:09.985655 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d51c93-371f-450b-bc05-2bbe03bfd362" path="/var/lib/kubelet/pods/67d51c93-371f-450b-bc05-2bbe03bfd362/volumes" Feb 26 15:04:19 crc kubenswrapper[4724]: I0226 15:04:19.795014 4724 generic.go:334] "Generic (PLEG): container finished" podID="fca8bdcb-261d-489b-897c-35d816391828" containerID="f7239ecfd13429cc4b117011d7457d30bf0871f2ad76c01507123249151f04e2" exitCode=0 Feb 26 15:04:19 crc kubenswrapper[4724]: I0226 15:04:19.795221 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" event={"ID":"fca8bdcb-261d-489b-897c-35d816391828","Type":"ContainerDied","Data":"f7239ecfd13429cc4b117011d7457d30bf0871f2ad76c01507123249151f04e2"} Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.890269 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.916396 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gpvcm/crc-debug-5nv8j"] Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.923301 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gpvcm/crc-debug-5nv8j"] Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.925004 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fca8bdcb-261d-489b-897c-35d816391828-host\") pod \"fca8bdcb-261d-489b-897c-35d816391828\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.925118 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fca8bdcb-261d-489b-897c-35d816391828-host" (OuterVolumeSpecName: "host") pod "fca8bdcb-261d-489b-897c-35d816391828" (UID: "fca8bdcb-261d-489b-897c-35d816391828"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.925380 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtbs\" (UniqueName: \"kubernetes.io/projected/fca8bdcb-261d-489b-897c-35d816391828-kube-api-access-fmtbs\") pod \"fca8bdcb-261d-489b-897c-35d816391828\" (UID: \"fca8bdcb-261d-489b-897c-35d816391828\") " Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.925800 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fca8bdcb-261d-489b-897c-35d816391828-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:04:20 crc kubenswrapper[4724]: I0226 15:04:20.940350 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca8bdcb-261d-489b-897c-35d816391828-kube-api-access-fmtbs" (OuterVolumeSpecName: "kube-api-access-fmtbs") pod "fca8bdcb-261d-489b-897c-35d816391828" (UID: "fca8bdcb-261d-489b-897c-35d816391828"). InnerVolumeSpecName "kube-api-access-fmtbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:04:21 crc kubenswrapper[4724]: I0226 15:04:21.027212 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmtbs\" (UniqueName: \"kubernetes.io/projected/fca8bdcb-261d-489b-897c-35d816391828-kube-api-access-fmtbs\") on node \"crc\" DevicePath \"\"" Feb 26 15:04:21 crc kubenswrapper[4724]: I0226 15:04:21.817090 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a58de10ae65ded25b58d10048bca4bf3e6e016ab69330a4eb3f596dfc83ba1" Feb 26 15:04:21 crc kubenswrapper[4724]: I0226 15:04:21.817440 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-5nv8j" Feb 26 15:04:21 crc kubenswrapper[4724]: I0226 15:04:21.985025 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca8bdcb-261d-489b-897c-35d816391828" path="/var/lib/kubelet/pods/fca8bdcb-261d-489b-897c-35d816391828/volumes" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.293739 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gpvcm/crc-debug-n28m4"] Feb 26 15:04:22 crc kubenswrapper[4724]: E0226 15:04:22.294361 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca8bdcb-261d-489b-897c-35d816391828" containerName="container-00" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.294379 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca8bdcb-261d-489b-897c-35d816391828" containerName="container-00" Feb 26 15:04:22 crc kubenswrapper[4724]: E0226 15:04:22.294392 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4947b58-8051-4d46-8de7-05973c9428ea" containerName="oc" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.294399 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4947b58-8051-4d46-8de7-05973c9428ea" containerName="oc" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.294573 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca8bdcb-261d-489b-897c-35d816391828" containerName="container-00" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.294592 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4947b58-8051-4d46-8de7-05973c9428ea" containerName="oc" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.295201 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.297609 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gpvcm"/"default-dockercfg-tp9g5" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.401760 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4t8m8"] Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.404947 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.427616 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4t8m8"] Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.451887 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkq56\" (UniqueName: \"kubernetes.io/projected/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-kube-api-access-rkq56\") pod \"crc-debug-n28m4\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.451974 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-host\") pod \"crc-debug-n28m4\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.471353 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466fc4f46-xdj8r_f9707878-82b6-46d7-b6c6-65745f7c72c3/barbican-api/0.log" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.553458 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-utilities\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.553533 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rgn\" (UniqueName: \"kubernetes.io/projected/e0a7081e-67c3-4dbe-a338-b66db8607aad-kube-api-access-68rgn\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.553803 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkq56\" (UniqueName: \"kubernetes.io/projected/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-kube-api-access-rkq56\") pod \"crc-debug-n28m4\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.553979 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-host\") pod \"crc-debug-n28m4\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.554010 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-catalog-content\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.554107 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-host\") pod \"crc-debug-n28m4\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.569405 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkq56\" (UniqueName: \"kubernetes.io/projected/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-kube-api-access-rkq56\") pod \"crc-debug-n28m4\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.588586 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466fc4f46-xdj8r_f9707878-82b6-46d7-b6c6-65745f7c72c3/barbican-api-log/0.log" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.610086 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.655955 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rgn\" (UniqueName: \"kubernetes.io/projected/e0a7081e-67c3-4dbe-a338-b66db8607aad-kube-api-access-68rgn\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.656597 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-catalog-content\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.656171 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-catalog-content\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.656920 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-utilities\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.656701 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-utilities\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.678480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rgn\" (UniqueName: 
\"kubernetes.io/projected/e0a7081e-67c3-4dbe-a338-b66db8607aad-kube-api-access-68rgn\") pod \"certified-operators-4t8m8\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.719571 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.840660 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/crc-debug-n28m4" event={"ID":"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22","Type":"ContainerStarted","Data":"b3cceb1f898b8d86cfb8931946e45feafaf1386b56f1583bb05c7b1d2b50cb5e"} Feb 26 15:04:22 crc kubenswrapper[4724]: I0226 15:04:22.976151 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.086905 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59bb6b4c7b-c52zs_f4f8bc69-bc44-4cda-8799-9b3e0786ef81/barbican-keystone-listener/0.log" Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.268023 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59bb6b4c7b-c52zs_f4f8bc69-bc44-4cda-8799-9b3e0786ef81/barbican-keystone-listener-log/0.log" Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.378449 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4t8m8"] Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.493838 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84bb945b69-xfww2_04c98d03-1308-4014-8703-2c58516595ca/barbican-worker/0.log" Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.632942 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84bb945b69-xfww2_04c98d03-1308-4014-8703-2c58516595ca/barbican-worker-log/0.log" Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.684589 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7_fb1451db-04cb-41fc-b46a-3a64ea6e8528/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.851686 4724 generic.go:334] "Generic (PLEG): container finished" podID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerID="9623f4f7b845c98140e2cccf3124a2383c02da8561807d02de35015dd51e39ec" exitCode=0 Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.851774 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerDied","Data":"9623f4f7b845c98140e2cccf3124a2383c02da8561807d02de35015dd51e39ec"} Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.851868 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerStarted","Data":"d445d45ddf492a7b2d55b7a0eef228cf30d142dd6c814863a41503723c57c628"} Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.855805 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" 
event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"1413d2ccbd104e8150cde8d90f88242e089bd6ca48f9c203576affea50184696"} Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.859085 4724 generic.go:334] "Generic (PLEG): container finished" podID="0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" containerID="e315be8bf9d4ec81a6c49869422c6ee416f97b62bf15063d9c5ae928b975836f" exitCode=1 Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.859203 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/crc-debug-n28m4" event={"ID":"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22","Type":"ContainerDied","Data":"e315be8bf9d4ec81a6c49869422c6ee416f97b62bf15063d9c5ae928b975836f"} Feb 26 15:04:23 crc kubenswrapper[4724]: I0226 15:04:23.977255 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gpvcm/crc-debug-n28m4"] Feb 26 15:04:24 crc kubenswrapper[4724]: I0226 15:04:24.095546 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gpvcm/crc-debug-n28m4"] Feb 26 15:04:24 crc kubenswrapper[4724]: I0226 15:04:24.202616 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/ceilometer-central-agent/0.log" Feb 26 15:04:24 crc kubenswrapper[4724]: I0226 15:04:24.382499 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/proxy-httpd/0.log" Feb 26 15:04:24 crc kubenswrapper[4724]: I0226 15:04:24.427966 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/sg-core/0.log" Feb 26 15:04:24 crc kubenswrapper[4724]: I0226 15:04:24.477429 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/ceilometer-notification-agent/0.log" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.005220 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.016164 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a66d564c-8f30-413c-8026-578de3a429d4/cinder-api-log/0.log" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.047443 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a66d564c-8f30-413c-8026-578de3a429d4/cinder-api/0.log" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.049974 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67ba4493-2ccf-47d8-a018-eadc53f931cf/cinder-scheduler/0.log" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.170659 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkq56\" (UniqueName: \"kubernetes.io/projected/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-kube-api-access-rkq56\") pod \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.171019 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-host\") pod \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\" (UID: \"0a1f0167-f063-43ec-a0e2-b9f7ebb05a22\") " Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.171633 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-host" (OuterVolumeSpecName: "host") pod "0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" (UID: "0a1f0167-f063-43ec-a0e2-b9f7ebb05a22"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.194509 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-kube-api-access-rkq56" (OuterVolumeSpecName: "kube-api-access-rkq56") pod "0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" (UID: "0a1f0167-f063-43ec-a0e2-b9f7ebb05a22"). InnerVolumeSpecName "kube-api-access-rkq56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.273865 4724 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.273903 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkq56\" (UniqueName: \"kubernetes.io/projected/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22-kube-api-access-rkq56\") on node \"crc\" DevicePath \"\"" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.405756 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67ba4493-2ccf-47d8-a018-eadc53f931cf/probe/0.log" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.606698 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6c626_a96647e0-99f5-4a89-823e-87f946fbfc02/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.888170 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cceb1f898b8d86cfb8931946e45feafaf1386b56f1583bb05c7b1d2b50cb5e" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.888441 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/crc-debug-n28m4" Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.890570 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerStarted","Data":"0aca4ae9af10f984fcd996b23013f6c387f38b40e8527df87886672278123043"} Feb 26 15:04:25 crc kubenswrapper[4724]: I0226 15:04:25.985626 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" path="/var/lib/kubelet/pods/0a1f0167-f063-43ec-a0e2-b9f7ebb05a22/volumes" Feb 26 15:04:26 crc kubenswrapper[4724]: I0226 15:04:26.011425 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt_cdfbc2ed-ca25-4209-b3d8-d372bc73801e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:26 crc kubenswrapper[4724]: I0226 15:04:26.030163 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64f6bf65cc-sgjfx_10b37b6f-2173-460a-aebf-876cd4efc50a/init/0.log" Feb 26 15:04:26 crc kubenswrapper[4724]: I0226 15:04:26.336558 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64f6bf65cc-sgjfx_10b37b6f-2173-460a-aebf-876cd4efc50a/init/0.log" Feb 26 15:04:26 crc kubenswrapper[4724]: I0226 15:04:26.785470 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q_3587d474-38c2-4bdb-af02-8f03932c85bc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:26 crc kubenswrapper[4724]: I0226 15:04:26.877408 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64f6bf65cc-sgjfx_10b37b6f-2173-460a-aebf-876cd4efc50a/dnsmasq-dns/0.log" Feb 26 15:04:26 crc kubenswrapper[4724]: I0226 15:04:26.888890 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3fdec6fc-d28c-456b-b3a9-6eae59d27655/glance-httpd/0.log" Feb 26 15:04:27 crc 
kubenswrapper[4724]: I0226 15:04:27.367316 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4468be96-ea3b-4b93-8c93-82b6e51401e1/glance-log/0.log" Feb 26 15:04:27 crc kubenswrapper[4724]: I0226 15:04:27.369359 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3fdec6fc-d28c-456b-b3a9-6eae59d27655/glance-log/0.log" Feb 26 15:04:27 crc kubenswrapper[4724]: I0226 15:04:27.425518 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4468be96-ea3b-4b93-8c93-82b6e51401e1/glance-httpd/0.log" Feb 26 15:04:27 crc kubenswrapper[4724]: I0226 15:04:27.934867 4724 generic.go:334] "Generic (PLEG): container finished" podID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerID="0aca4ae9af10f984fcd996b23013f6c387f38b40e8527df87886672278123043" exitCode=0 Feb 26 15:04:27 crc kubenswrapper[4724]: I0226 15:04:27.934939 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerDied","Data":"0aca4ae9af10f984fcd996b23013f6c387f38b40e8527df87886672278123043"} Feb 26 15:04:28 crc kubenswrapper[4724]: I0226 15:04:28.389936 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-78fbbcf444-k8n4t_791d107b-678e-448e-859c-864e9e66dd16/heat-engine/0.log" Feb 26 15:04:28 crc kubenswrapper[4724]: I0226 15:04:28.953005 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerStarted","Data":"9b390eb82b5b235d2230060e54f677846e483981dc92d96ae608ece3961c75a1"} Feb 26 15:04:28 crc kubenswrapper[4724]: I0226 15:04:28.972351 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4t8m8" podStartSLOduration=2.423183585 podStartE2EDuration="6.972333087s" podCreationTimestamp="2026-02-26 15:04:22 +0000 UTC" firstStartedPulling="2026-02-26 15:04:23.855432547 +0000 UTC m=+14330.511171652" lastFinishedPulling="2026-02-26 15:04:28.404582039 +0000 UTC m=+14335.060321154" observedRunningTime="2026-02-26 15:04:28.967933956 +0000 UTC m=+14335.623673071" watchObservedRunningTime="2026-02-26 15:04:28.972333087 +0000 UTC m=+14335.628072202" Feb 26 15:04:29 crc kubenswrapper[4724]: I0226 15:04:29.002059 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon/2.log" Feb 26 15:04:29 crc kubenswrapper[4724]: I0226 15:04:29.061158 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon/1.log" Feb 26 15:04:29 crc kubenswrapper[4724]: I0226 15:04:29.447679 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp_5f7c705e-b14f-49dc-9510-4c4b71838bbf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:30 crc kubenswrapper[4724]: I0226 15:04:30.163288 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5bbc75466c-6dmf6_e57d7bd1-267a-4643-9581-8554109f7cba/heat-cfnapi/0.log" Feb 26 15:04:30 crc kubenswrapper[4724]: I0226 15:04:30.232907 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pvg6v_34c7b1bf-1861-40ec-910b-36f494a396f6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:30 crc kubenswrapper[4724]: I0226 15:04:30.390222 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-58cc4895d6-7zzgw_60dc589b-0663-4d44-a1aa-c57772731f5b/heat-api/0.log" Feb 26 15:04:30 crc kubenswrapper[4724]: I0226 15:04:30.798130 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535181-fgzvv_b8280e7e-39bf-4ace-b878-cc9148026c74/keystone-cron/0.log" Feb 26 15:04:31 crc kubenswrapper[4724]: I0226 15:04:31.042662 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535241-52mbf_5276bce5-b50f-415f-a487-2bcf33a42e0d/keystone-cron/0.log" Feb 26 15:04:31 crc kubenswrapper[4724]: I0226 15:04:31.169636 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535301-44wcs_c640aad3-ad6f-456d-9901-0bb0a62b88e4/keystone-cron/0.log" Feb 26 15:04:31 crc kubenswrapper[4724]: I0226 15:04:31.471838 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon-log/0.log" Feb 26 15:04:31 crc kubenswrapper[4724]: I0226 15:04:31.601833 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4ea1726a-a8a4-4e5d-b39f-c8393e0dad54/kube-state-metrics/0.log" Feb 26 15:04:32 crc kubenswrapper[4724]: I0226 15:04:32.050366 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-678bf4f784-7wp9n_e21108d2-f9c8-4427-80c5-402ec0dbf689/keystone-api/0.log" Feb 26 15:04:32 crc kubenswrapper[4724]: I0226 15:04:32.127857 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bwr97_8a0a7cda-6bc1-44ce-8d91-ca87271fb03e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:32 crc kubenswrapper[4724]: I0226 15:04:32.645011 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cc56c757c-ds2pf_4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf/neutron-httpd/0.log" Feb 26 15:04:32 crc kubenswrapper[4724]: I0226 15:04:32.720391 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:32 crc kubenswrapper[4724]: I0226 15:04:32.720673 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:04:32 crc kubenswrapper[4724]: I0226 15:04:32.853565 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5_d044f276-fe55-46c7-ba3f-e566a7f73e5b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:33 crc kubenswrapper[4724]: I0226 15:04:33.321906 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cc56c757c-ds2pf_4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf/neutron-api/0.log" Feb 26 15:04:33 crc kubenswrapper[4724]: I0226 15:04:33.802646 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4t8m8" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" probeResult="failure" output=< Feb 26 15:04:33 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:04:33 crc kubenswrapper[4724]: > Feb 26 15:04:34 crc 
kubenswrapper[4724]: I0226 15:04:34.650267 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f/nova-cell0-conductor-conductor/0.log" Feb 26 15:04:34 crc kubenswrapper[4724]: I0226 15:04:34.803420 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8b01b6fe-7860-4ea8-9a62-4113061e1d42/nova-cell1-conductor-conductor/0.log" Feb 26 15:04:35 crc kubenswrapper[4724]: I0226 15:04:35.737349 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 15:04:35 crc kubenswrapper[4724]: I0226 15:04:35.826122 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tm4z5_9b788179-93c8-43fa-9c05-ce6807179444/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:36 crc kubenswrapper[4724]: I0226 15:04:36.282454 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a3ba1adb-959d-470b-a25d-5967665793f3/nova-metadata-log/0.log" Feb 26 15:04:37 crc kubenswrapper[4724]: I0226 15:04:37.447578 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2496c701-9abc-4d28-8f5d-9cde4cefbabb/nova-api-log/0.log" Feb 26 15:04:37 crc kubenswrapper[4724]: I0226 15:04:37.913488 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8972b4b1-55d2-433f-a7f0-886a242a9db2/nova-scheduler-scheduler/0.log" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.093027 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fxkc4"] Feb 26 15:04:38 crc kubenswrapper[4724]: E0226 15:04:38.093412 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" containerName="container-00" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.093427 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" containerName="container-00" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.093643 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1f0167-f063-43ec-a0e2-b9f7ebb05a22" containerName="container-00" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.094900 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.104573 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b0d66ab1-513b-452a-9f31-bfc4b4be6c18/mysql-bootstrap/0.log" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.137094 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fxkc4"] Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.224214 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-utilities\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.224385 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-catalog-content\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.224420 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5v7\" (UniqueName: \"kubernetes.io/projected/37fc30da-8fc7-4653-a975-bb8411785579-kube-api-access-bt5v7\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.326241 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-catalog-content\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.326283 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5v7\" (UniqueName: \"kubernetes.io/projected/37fc30da-8fc7-4653-a975-bb8411785579-kube-api-access-bt5v7\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.326355 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-utilities\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.326836 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-utilities\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.327083 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-catalog-content\") pod \"redhat-operators-fxkc4\" (UID: 
\"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.366123 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5v7\" (UniqueName: \"kubernetes.io/projected/37fc30da-8fc7-4653-a975-bb8411785579-kube-api-access-bt5v7\") pod \"redhat-operators-fxkc4\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.427696 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.496114 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b0d66ab1-513b-452a-9f31-bfc4b4be6c18/mysql-bootstrap/0.log" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.511071 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b0d66ab1-513b-452a-9f31-bfc4b4be6c18/galera/0.log" Feb 26 15:04:38 crc kubenswrapper[4724]: I0226 15:04:38.894615 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abc9b19-0018-46d1-a119-0ffb069a1795/mysql-bootstrap/0.log" Feb 26 15:04:39 crc kubenswrapper[4724]: I0226 15:04:39.041984 4724 scope.go:117] "RemoveContainer" containerID="60d8c213b0a54202992b4956fe573fab6027bbd3c4d4e0cff4cad94e933e6d13" Feb 26 15:04:39 crc kubenswrapper[4724]: I0226 15:04:39.059752 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerStarted","Data":"c8680466bbeff2ca34d657e762112aaf3924dd3071a557ece5542cebeda914fd"} Feb 26 15:04:39 crc kubenswrapper[4724]: I0226 15:04:39.072116 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fxkc4"] Feb 26 15:04:39 crc kubenswrapper[4724]: I0226 15:04:39.212982 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abc9b19-0018-46d1-a119-0ffb069a1795/mysql-bootstrap/0.log" Feb 26 15:04:39 crc kubenswrapper[4724]: I0226 15:04:39.342972 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abc9b19-0018-46d1-a119-0ffb069a1795/galera/0.log" Feb 26 15:04:39 crc kubenswrapper[4724]: I0226 15:04:39.730963 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed/openstackclient/0.log" Feb 26 15:04:40 crc kubenswrapper[4724]: I0226 15:04:40.069617 4724 generic.go:334] "Generic (PLEG): container finished" podID="37fc30da-8fc7-4653-a975-bb8411785579" containerID="4871c091d5bc11fe66df9c03934ba0e589a9d839387c502fd6c42dfd9a364692" exitCode=0 Feb 26 15:04:40 crc kubenswrapper[4724]: I0226 15:04:40.069659 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerDied","Data":"4871c091d5bc11fe66df9c03934ba0e589a9d839387c502fd6c42dfd9a364692"} Feb 26 15:04:40 crc kubenswrapper[4724]: I0226 15:04:40.206225 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2496c701-9abc-4d28-8f5d-9cde4cefbabb/nova-api-api/0.log" Feb 26 15:04:40 crc kubenswrapper[4724]: I0226 15:04:40.409738 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-wm86x_9784324f-b3cf-403e-9e3f-c5298a5257eb/openstack-network-exporter/0.log" Feb 26 15:04:40 crc kubenswrapper[4724]: I0226 15:04:40.765388 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovsdb-server-init/0.log" Feb 26 15:04:41 crc kubenswrapper[4724]: I0226 15:04:41.198329 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovsdb-server/0.log" Feb 26 15:04:41 crc kubenswrapper[4724]: I0226 15:04:41.209503 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovsdb-server-init/0.log" Feb 26 15:04:41 crc kubenswrapper[4724]: I0226 15:04:41.312195 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovs-vswitchd/0.log" Feb 26 15:04:41 crc kubenswrapper[4724]: I0226 15:04:41.598236 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x9682_5b8939ea-2d97-461c-ad75-cba4379157f7/ovn-controller/0.log" Feb 26 15:04:41 crc kubenswrapper[4724]: I0226 15:04:41.860961 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qw42n_33c4673e-f3b9-4bbf-a97d-39412344f6c8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:42 crc kubenswrapper[4724]: I0226 15:04:42.117831 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerStarted","Data":"172ba54a4adb2d7f911ebe97a9af4b2069f6750f6da14ca197375d728607adde"} Feb 26 15:04:42 crc kubenswrapper[4724]: I0226 15:04:42.369698 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_619c3911-f86d-468d-b689-e939b16388e2/ovn-northd/0.log" Feb 26 15:04:42 crc kubenswrapper[4724]: I0226 15:04:42.395424 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_619c3911-f86d-468d-b689-e939b16388e2/openstack-network-exporter/0.log" Feb 26 15:04:43 crc kubenswrapper[4724]: I0226 15:04:43.293966 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_028cb20f-b715-40db-94c1-38bfb934ef53/openstack-network-exporter/0.log" Feb 26 15:04:43 crc kubenswrapper[4724]: I0226 15:04:43.488076 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_028cb20f-b715-40db-94c1-38bfb934ef53/ovsdbserver-nb/0.log" Feb 26 15:04:43 crc kubenswrapper[4724]: I0226 15:04:43.761533 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6f3d9665-0fdf-4b18-a4cb-1e84f24327ca/openstack-network-exporter/0.log" Feb 26 15:04:43 crc kubenswrapper[4724]: I0226 15:04:43.798222 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4t8m8" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" probeResult="failure" output=< Feb 26 15:04:43 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:04:43 crc kubenswrapper[4724]: > Feb 26 15:04:43 crc kubenswrapper[4724]: I0226 15:04:43.883215 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6f3d9665-0fdf-4b18-a4cb-1e84f24327ca/ovsdbserver-sb/0.log" Feb 26 
15:04:44 crc kubenswrapper[4724]: I0226 15:04:44.333822 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a3ba1adb-959d-470b-a25d-5967665793f3/nova-metadata-metadata/0.log" Feb 26 15:04:45 crc kubenswrapper[4724]: I0226 15:04:45.136752 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869f945844-vjsk6_8896d359-383e-4f56-a18d-2d8a913d05a4/placement-api/0.log" Feb 26 15:04:45 crc kubenswrapper[4724]: I0226 15:04:45.349223 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869f945844-vjsk6_8896d359-383e-4f56-a18d-2d8a913d05a4/placement-log/0.log" Feb 26 15:04:45 crc kubenswrapper[4724]: I0226 15:04:45.431141 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df/setup-container/0.log" Feb 26 15:04:45 crc kubenswrapper[4724]: I0226 15:04:45.537940 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df/setup-container/0.log" Feb 26 15:04:45 crc kubenswrapper[4724]: I0226 15:04:45.800891 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df/rabbitmq/0.log" Feb 26 15:04:45 crc kubenswrapper[4724]: I0226 15:04:45.910125 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bad75855-a326-41f0-8b17-c83e5be398b9/setup-container/0.log" Feb 26 15:04:46 crc kubenswrapper[4724]: I0226 15:04:46.139757 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bad75855-a326-41f0-8b17-c83e5be398b9/setup-container/0.log" Feb 26 15:04:46 crc kubenswrapper[4724]: I0226 15:04:46.326595 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bad75855-a326-41f0-8b17-c83e5be398b9/rabbitmq/0.log" Feb 26 15:04:46 crc kubenswrapper[4724]: I0226 15:04:46.616155 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn_fa584460-b6d4-4fe8-b351-f55f6c5a969a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:46 crc kubenswrapper[4724]: I0226 15:04:46.983009 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2_49850149-79d3-4700-801a-c2630caba9c9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:47 crc kubenswrapper[4724]: I0226 15:04:47.052046 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7c86p_de96567c-d135-4e9a-b847-ce90658d94be/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:47 crc kubenswrapper[4724]: I0226 15:04:47.454837 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rds5w_cad1abca-ca70-4988-804c-ca6d35ba05d7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:47 crc kubenswrapper[4724]: I0226 15:04:47.511946 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gwnp7_2206a227-78b8-4ca1-a425-fb061de91843/ssh-known-hosts-edpm-deployment/0.log" Feb 26 15:04:47 crc kubenswrapper[4724]: I0226 15:04:47.880986 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-746558bfbf-gbdpm_acbb8b99-0b04-48c7-904e-a5c5304813a3/proxy-server/0.log" Feb 26 15:04:48 crc 
kubenswrapper[4724]: I0226 15:04:48.428915 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7kkhs_e7412680-68df-4ebb-9961-8a89d8f83176/swift-ring-rebalance/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.073303 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-auditor/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.145612 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-746558bfbf-gbdpm_acbb8b99-0b04-48c7-904e-a5c5304813a3/proxy-httpd/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.203124 4724 generic.go:334] "Generic (PLEG): container finished" podID="37fc30da-8fc7-4653-a975-bb8411785579" containerID="172ba54a4adb2d7f911ebe97a9af4b2069f6750f6da14ca197375d728607adde" exitCode=0 Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.203620 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerDied","Data":"172ba54a4adb2d7f911ebe97a9af4b2069f6750f6da14ca197375d728607adde"} Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.445355 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-reaper/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.564309 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-server/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.602779 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-auditor/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.961691 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-updater/0.log" Feb 26 15:04:49 crc kubenswrapper[4724]: I0226 15:04:49.970983 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-server/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.023958 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-replicator/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.037659 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-replicator/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.422397 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-auditor/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.521606 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-expirer/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.615819 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-replicator/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.806188 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-server/0.log" Feb 26 15:04:50 crc kubenswrapper[4724]: I0226 15:04:50.842019 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-updater/0.log" Feb 26 15:04:51 crc kubenswrapper[4724]: I0226 15:04:51.202684 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/rsync/0.log" Feb 26 15:04:51 crc kubenswrapper[4724]: I0226 15:04:51.283058 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerStarted","Data":"f13aa57f9fa43a7edbd68e4ad0386fce99906e8aaee36733924242df2851546f"} Feb 26 15:04:51 crc kubenswrapper[4724]: I0226 15:04:51.318377 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fxkc4" podStartSLOduration=3.418199328 podStartE2EDuration="13.31834543s" podCreationTimestamp="2026-02-26 15:04:38 +0000 UTC" firstStartedPulling="2026-02-26 15:04:40.071978381 +0000 UTC m=+14346.727717496" lastFinishedPulling="2026-02-26 15:04:49.972124483 +0000 UTC m=+14356.627863598" observedRunningTime="2026-02-26 15:04:51.30924101 +0000 UTC m=+14357.964980125" watchObservedRunningTime="2026-02-26 15:04:51.31834543 +0000 UTC m=+14357.974084545" Feb 26 15:04:51 crc kubenswrapper[4724]: I0226 15:04:51.405154 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/swift-recon-cron/0.log" Feb 26 15:04:51 crc kubenswrapper[4724]: I0226 15:04:51.868237 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-stfrc_b9209966-a73c-4858-8faf-9053e5447993/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:51 crc kubenswrapper[4724]: I0226 15:04:51.957288 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_14b6ff63-4a92-49d9-9d37-0f2092545b77/tempest-tests-tempest-tests-runner/0.log" Feb 26 15:04:52 crc kubenswrapper[4724]: I0226 15:04:52.653573 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78368336-7209-421d-b638-e47679769c6d/test-operator-logs-container/0.log" Feb 26 15:04:52 crc kubenswrapper[4724]: I0226 15:04:52.939877 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_b9b5bd47-dc7c-492d-8c33-cd7d528555f6/tempest-tests-tempest-tests-runner/0.log" Feb 26 15:04:53 crc kubenswrapper[4724]: I0226 15:04:53.426665 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd_e4b3aebd-40f4-47b8-836b-dd94ef4010af/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:04:53 crc kubenswrapper[4724]: I0226 15:04:53.783383 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4t8m8" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" probeResult="failure" output=< Feb 26 15:04:53 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:04:53 crc kubenswrapper[4724]: > Feb 26 15:04:54 crc kubenswrapper[4724]: I0226 15:04:54.781656 4724 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b70be877-253f-4859-ae54-bd241f38cb93/memcached/0.log" Feb 26 15:04:58 crc kubenswrapper[4724]: I0226 15:04:58.428987 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:58 crc kubenswrapper[4724]: I0226 15:04:58.430333 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:04:59 crc kubenswrapper[4724]: I0226 15:04:59.475667 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fxkc4" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" probeResult="failure" output=< Feb 26 15:04:59 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:04:59 crc kubenswrapper[4724]: > Feb 26 15:05:02 crc kubenswrapper[4724]: I0226 15:05:02.774331 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:05:02 crc kubenswrapper[4724]: I0226 15:05:02.830160 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:05:03 crc kubenswrapper[4724]: I0226 15:05:03.021728 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4t8m8"] Feb 26 15:05:04 crc kubenswrapper[4724]: I0226 15:05:04.388220 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4t8m8" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" containerID="cri-o://9b390eb82b5b235d2230060e54f677846e483981dc92d96ae608ece3961c75a1" gracePeriod=2 Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.394736 4724 generic.go:334] "Generic (PLEG): container finished" podID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerID="9b390eb82b5b235d2230060e54f677846e483981dc92d96ae608ece3961c75a1" exitCode=0 Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.394919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerDied","Data":"9b390eb82b5b235d2230060e54f677846e483981dc92d96ae608ece3961c75a1"} Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.790475 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.917771 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-catalog-content\") pod \"e0a7081e-67c3-4dbe-a338-b66db8607aad\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.917959 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-utilities\") pod \"e0a7081e-67c3-4dbe-a338-b66db8607aad\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.918008 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rgn\" (UniqueName: \"kubernetes.io/projected/e0a7081e-67c3-4dbe-a338-b66db8607aad-kube-api-access-68rgn\") pod \"e0a7081e-67c3-4dbe-a338-b66db8607aad\" (UID: \"e0a7081e-67c3-4dbe-a338-b66db8607aad\") " Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.922402 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-utilities" (OuterVolumeSpecName: "utilities") pod "e0a7081e-67c3-4dbe-a338-b66db8607aad" (UID: "e0a7081e-67c3-4dbe-a338-b66db8607aad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.930459 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a7081e-67c3-4dbe-a338-b66db8607aad-kube-api-access-68rgn" (OuterVolumeSpecName: "kube-api-access-68rgn") pod "e0a7081e-67c3-4dbe-a338-b66db8607aad" (UID: "e0a7081e-67c3-4dbe-a338-b66db8607aad"). InnerVolumeSpecName "kube-api-access-68rgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:05:05 crc kubenswrapper[4724]: I0226 15:05:05.983510 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0a7081e-67c3-4dbe-a338-b66db8607aad" (UID: "e0a7081e-67c3-4dbe-a338-b66db8607aad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.020134 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.020174 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rgn\" (UniqueName: \"kubernetes.io/projected/e0a7081e-67c3-4dbe-a338-b66db8607aad-kube-api-access-68rgn\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.020200 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a7081e-67c3-4dbe-a338-b66db8607aad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.403459 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t8m8" event={"ID":"e0a7081e-67c3-4dbe-a338-b66db8607aad","Type":"ContainerDied","Data":"d445d45ddf492a7b2d55b7a0eef228cf30d142dd6c814863a41503723c57c628"} Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.403781 4724 scope.go:117] "RemoveContainer" containerID="9b390eb82b5b235d2230060e54f677846e483981dc92d96ae608ece3961c75a1" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.403542 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t8m8" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.423275 4724 scope.go:117] "RemoveContainer" containerID="0aca4ae9af10f984fcd996b23013f6c387f38b40e8527df87886672278123043" Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.427967 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4t8m8"] Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.437757 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4t8m8"] Feb 26 15:05:06 crc kubenswrapper[4724]: I0226 15:05:06.441341 4724 scope.go:117] "RemoveContainer" containerID="9623f4f7b845c98140e2cccf3124a2383c02da8561807d02de35015dd51e39ec" Feb 26 15:05:07 crc kubenswrapper[4724]: I0226 15:05:07.986404 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" path="/var/lib/kubelet/pods/e0a7081e-67c3-4dbe-a338-b66db8607aad/volumes" Feb 26 15:05:09 crc kubenswrapper[4724]: I0226 15:05:09.486768 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fxkc4" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" probeResult="failure" output=< Feb 26 15:05:09 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:05:09 crc kubenswrapper[4724]: > Feb 26 15:05:19 crc kubenswrapper[4724]: I0226 15:05:19.477799 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fxkc4" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" probeResult="failure" output=< Feb 26 15:05:19 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:05:19 crc kubenswrapper[4724]: > Feb 26 15:05:29 crc kubenswrapper[4724]: I0226 15:05:29.483225 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fxkc4" 
podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" probeResult="failure" output=< Feb 26 15:05:29 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:05:29 crc kubenswrapper[4724]: > Feb 26 15:05:32 crc kubenswrapper[4724]: I0226 15:05:32.389942 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466fc4f46-xdj8r_f9707878-82b6-46d7-b6c6-65745f7c72c3/barbican-api/0.log" Feb 26 15:05:32 crc kubenswrapper[4724]: I0226 15:05:32.389943 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466fc4f46-xdj8r_f9707878-82b6-46d7-b6c6-65745f7c72c3/barbican-api-log/0.log" Feb 26 15:05:32 crc kubenswrapper[4724]: I0226 15:05:32.664893 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59bb6b4c7b-c52zs_f4f8bc69-bc44-4cda-8799-9b3e0786ef81/barbican-keystone-listener/0.log" Feb 26 15:05:32 crc kubenswrapper[4724]: I0226 15:05:32.814856 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59bb6b4c7b-c52zs_f4f8bc69-bc44-4cda-8799-9b3e0786ef81/barbican-keystone-listener-log/0.log" Feb 26 15:05:32 crc kubenswrapper[4724]: I0226 15:05:32.918742 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84bb945b69-xfww2_04c98d03-1308-4014-8703-2c58516595ca/barbican-worker/0.log" Feb 26 15:05:32 crc kubenswrapper[4724]: I0226 15:05:32.971230 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84bb945b69-xfww2_04c98d03-1308-4014-8703-2c58516595ca/barbican-worker-log/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.123916 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-npvx7_fb1451db-04cb-41fc-b46a-3a64ea6e8528/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.350507 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/ceilometer-central-agent/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.410754 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/ceilometer-notification-agent/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.470817 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/proxy-httpd/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.517239 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3da6a1f6-3a11-4249-8038-9b41635e7011/sg-core/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.750503 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a66d564c-8f30-413c-8026-578de3a429d4/cinder-api-log/0.log" Feb 26 15:05:33 crc kubenswrapper[4724]: I0226 15:05:33.938477 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a66d564c-8f30-413c-8026-578de3a429d4/cinder-api/0.log" Feb 26 15:05:34 crc kubenswrapper[4724]: I0226 15:05:34.112330 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67ba4493-2ccf-47d8-a018-eadc53f931cf/probe/0.log" Feb 26 15:05:34 crc kubenswrapper[4724]: I0226 15:05:34.223665 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_67ba4493-2ccf-47d8-a018-eadc53f931cf/cinder-scheduler/0.log" Feb 26 15:05:34 crc kubenswrapper[4724]: I0226 15:05:34.408097 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6c626_a96647e0-99f5-4a89-823e-87f946fbfc02/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:34 crc kubenswrapper[4724]: I0226 15:05:34.617208 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pl4pt_cdfbc2ed-ca25-4209-b3d8-d372bc73801e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:34 crc kubenswrapper[4724]: I0226 15:05:34.669335 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64f6bf65cc-sgjfx_10b37b6f-2173-460a-aebf-876cd4efc50a/init/0.log" Feb 26 15:05:34 crc kubenswrapper[4724]: I0226 15:05:34.790911 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64f6bf65cc-sgjfx_10b37b6f-2173-460a-aebf-876cd4efc50a/init/0.log" Feb 26 15:05:35 crc kubenswrapper[4724]: I0226 15:05:35.072318 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xkp9q_3587d474-38c2-4bdb-af02-8f03932c85bc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:35 crc kubenswrapper[4724]: I0226 15:05:35.222640 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64f6bf65cc-sgjfx_10b37b6f-2173-460a-aebf-876cd4efc50a/dnsmasq-dns/0.log" Feb 26 15:05:35 crc kubenswrapper[4724]: I0226 15:05:35.378733 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3fdec6fc-d28c-456b-b3a9-6eae59d27655/glance-httpd/0.log" Feb 26 15:05:35 crc kubenswrapper[4724]: I0226 15:05:35.422489 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3fdec6fc-d28c-456b-b3a9-6eae59d27655/glance-log/0.log" Feb 26 15:05:35 crc kubenswrapper[4724]: I0226 15:05:35.839204 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4468be96-ea3b-4b93-8c93-82b6e51401e1/glance-httpd/0.log" Feb 26 15:05:35 crc kubenswrapper[4724]: I0226 15:05:35.858208 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4468be96-ea3b-4b93-8c93-82b6e51401e1/glance-log/0.log" Feb 26 15:05:36 crc kubenswrapper[4724]: I0226 15:05:36.693887 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-78fbbcf444-k8n4t_791d107b-678e-448e-859c-864e9e66dd16/heat-engine/0.log" Feb 26 15:05:37 crc kubenswrapper[4724]: I0226 15:05:37.307060 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon/2.log" Feb 26 15:05:37 crc kubenswrapper[4724]: I0226 15:05:37.455358 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon/1.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.036772 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9h9cp_5f7c705e-b14f-49dc-9510-4c4b71838bbf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.406608 4724 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/util/0.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.547400 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pvg6v_34c7b1bf-1861-40ec-910b-36f494a396f6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.834292 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5bbc75466c-6dmf6_e57d7bd1-267a-4643-9581-8554109f7cba/heat-cfnapi/0.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.870562 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/pull/0.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.882253 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/pull/0.log" Feb 26 15:05:38 crc kubenswrapper[4724]: I0226 15:05:38.963696 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/util/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.013549 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-58cc4895d6-7zzgw_60dc589b-0663-4d44-a1aa-c57772731f5b/heat-api/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.324319 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535181-fgzvv_b8280e7e-39bf-4ace-b878-cc9148026c74/keystone-cron/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.451881 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/extract/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.452848 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/pull/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.486691 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fxkc4" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" probeResult="failure" output=< Feb 26 15:05:39 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:05:39 crc kubenswrapper[4724]: > Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.530646 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535241-52mbf_5276bce5-b50f-415f-a487-2bcf33a42e0d/keystone-cron/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.606803 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/util/0.log" Feb 26 15:05:39 crc kubenswrapper[4724]: I0226 15:05:39.940900 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535301-44wcs_c640aad3-ad6f-456d-9901-0bb0a62b88e4/keystone-cron/0.log" Feb 26 
15:05:40 crc kubenswrapper[4724]: I0226 15:05:40.300140 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4ea1726a-a8a4-4e5d-b39f-c8393e0dad54/kube-state-metrics/0.log" Feb 26 15:05:40 crc kubenswrapper[4724]: I0226 15:05:40.350813 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-p58p8_9ae55185-83a5-47ea-b54f-01b31471f512/manager/0.log" Feb 26 15:05:40 crc kubenswrapper[4724]: I0226 15:05:40.461836 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bwr97_8a0a7cda-6bc1-44ce-8d91-ca87271fb03e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:40 crc kubenswrapper[4724]: I0226 15:05:40.877854 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-6qq4t_1c97807f-b47f-4762-80d8-a296d8108e19/manager/0.log" Feb 26 15:05:41 crc kubenswrapper[4724]: I0226 15:05:41.290086 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cc56c757c-ds2pf_4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf/neutron-httpd/0.log" Feb 26 15:05:41 crc kubenswrapper[4724]: I0226 15:05:41.361107 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-wxw2f_bc959a10-5f94-4d38-87d2-dda60f8ae078/manager/0.log" Feb 26 15:05:41 crc kubenswrapper[4724]: I0226 15:05:41.450025 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-6zmmb_731a1439-aa83-4119-ae37-23f526e6e73a/manager/0.log" Feb 26 15:05:41 crc kubenswrapper[4724]: I0226 15:05:41.654284 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977849d4-8s5ds_e4c4b3ae-030b-4e33-9779-2ffa39196a76/horizon-log/0.log" Feb 26 15:05:42 crc kubenswrapper[4724]: I0226 15:05:42.047334 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7cxg5_d044f276-fe55-46c7-ba3f-e566a7f73e5b/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:42 crc kubenswrapper[4724]: I0226 15:05:42.376706 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-pxtxm_2178a458-4e8c-4d30-bbdb-8a0ef864fd80/manager/0.log" Feb 26 15:05:42 crc kubenswrapper[4724]: I0226 15:05:42.465311 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-678bf4f784-7wp9n_e21108d2-f9c8-4427-80c5-402ec0dbf689/keystone-api/0.log" Feb 26 15:05:42 crc kubenswrapper[4724]: I0226 15:05:42.813153 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-8n9qj_2714a834-e9ca-40b1-a73c-2b890783f29e/manager/0.log" Feb 26 15:05:42 crc kubenswrapper[4724]: I0226 15:05:42.938437 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cc56c757c-ds2pf_4a2f193e-a9f5-4bf9-8039-cf8ae4393ecf/neutron-api/0.log" Feb 26 15:05:43 crc kubenswrapper[4724]: I0226 15:05:43.226546 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-rqsqh_d2b50788-4e25-4589-84b8-00851a2a18b7/manager/0.log" Feb 26 15:05:43 crc kubenswrapper[4724]: I0226 15:05:43.269435 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-bfrsl_5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73/manager/0.log" Feb 26 15:05:43 crc kubenswrapper[4724]: I0226 15:05:43.692582 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-vp8fp_da86929c-f438-4994-80be-1a7aa3b7b76e/manager/0.log" Feb 26 15:05:43 crc kubenswrapper[4724]: I0226 15:05:43.894000 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-wjjjc_193a7bdd-a3a7-493d-8c99-a04d591e3a19/manager/0.log" Feb 26 15:05:44 crc kubenswrapper[4724]: I0226 15:05:44.109244 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8b01b6fe-7860-4ea8-9a62-4113061e1d42/nova-cell1-conductor-conductor/0.log" Feb 26 15:05:44 crc kubenswrapper[4724]: I0226 15:05:44.118264 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_aa8d5ebb-2c8d-43a0-b68f-a6f2e8afaa4f/nova-cell0-conductor-conductor/0.log" Feb 26 15:05:44 crc kubenswrapper[4724]: I0226 15:05:44.172942 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-k75jd_9bcf19f6-1ed9-4315-a263-1bd5c8da7774/manager/0.log" Feb 26 15:05:44 crc kubenswrapper[4724]: I0226 15:05:44.197413 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-rjxxw_5b790d8b-575d-462e-a9b1-512d91261517/manager/0.log" Feb 26 15:05:44 crc kubenswrapper[4724]: I0226 15:05:44.500855 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l_39700bc5-43f0-49b6-b510-523322e34eb5/manager/0.log" Feb 26 15:05:44 crc kubenswrapper[4724]: I0226 15:05:44.987640 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-76b6d74844-bpg9d_20b666d6-e71f-4bdb-b71d-44ac3a0c74c6/operator/0.log" Feb 26 15:05:45 crc kubenswrapper[4724]: I0226 15:05:45.022893 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_56a5c934-fb2e-4ef6-9639-4ab0bd4c7a54/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 15:05:45 crc kubenswrapper[4724]: I0226 15:05:45.107526 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tm4z5_9b788179-93c8-43fa-9c05-ce6807179444/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:45 crc kubenswrapper[4724]: I0226 15:05:45.333290 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5f2tw_ee48a99c-cb5f-4564-9631-daeae942461e/registry-server/0.log" Feb 26 15:05:45 crc kubenswrapper[4724]: I0226 15:05:45.576290 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a3ba1adb-959d-470b-a25d-5967665793f3/nova-metadata-log/0.log" Feb 26 15:05:45 crc kubenswrapper[4724]: I0226 15:05:45.887294 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-pwg6d_cd71c91d-33bb-4eae-9f27-84f39ef7653d/manager/0.log" Feb 26 15:05:45 crc kubenswrapper[4724]: I0226 15:05:45.966981 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-6j4fp_d49437c7-7f60-4304-b216-dcf93e31be87/manager/0.log" Feb 26 15:05:46 crc kubenswrapper[4724]: I0226 15:05:46.359725 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2r984_dc7781b3-4d7b-4855-8e76-bb3ad2028a9c/operator/0.log" Feb 26 15:05:46 crc kubenswrapper[4724]: I0226 15:05:46.713837 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-ntcpr_d90588ea-6237-4fd0-a321-9c6db1e07525/manager/0.log" Feb 26 15:05:48 crc kubenswrapper[4724]: I0226 15:05:48.115231 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-f8k6f_897f5a3f-a04e-4725-8a9a-0ce91c8bb372/manager/0.log" Feb 26 15:05:48 crc kubenswrapper[4724]: I0226 15:05:48.157618 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-qhw9r_b14d5ade-65f3-4402-bacd-5acc8ef39ce5/manager/0.log" Feb 26 15:05:48 crc kubenswrapper[4724]: I0226 15:05:48.487979 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-pgtk4_d04a7b9b-e3a4-4876-bb57-10d86295d9c0/manager/0.log" Feb 26 15:05:48 crc kubenswrapper[4724]: I0226 15:05:48.797116 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8972b4b1-55d2-433f-a7f0-886a242a9db2/nova-scheduler-scheduler/0.log" Feb 26 15:05:49 crc kubenswrapper[4724]: I0226 15:05:49.402896 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2496c701-9abc-4d28-8f5d-9cde4cefbabb/nova-api-log/0.log" Feb 26 15:05:49 crc kubenswrapper[4724]: I0226 15:05:49.486124 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fxkc4" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" probeResult="failure" output=< Feb 26 15:05:49 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:05:49 crc kubenswrapper[4724]: > Feb 26 15:05:49 crc kubenswrapper[4724]: I0226 15:05:49.498106 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b0d66ab1-513b-452a-9f31-bfc4b4be6c18/mysql-bootstrap/0.log" Feb 26 15:05:49 crc kubenswrapper[4724]: I0226 15:05:49.833318 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b0d66ab1-513b-452a-9f31-bfc4b4be6c18/mysql-bootstrap/0.log" Feb 26 15:05:49 crc kubenswrapper[4724]: I0226 15:05:49.840782 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b0d66ab1-513b-452a-9f31-bfc4b4be6c18/galera/0.log" Feb 26 15:05:50 crc kubenswrapper[4724]: I0226 15:05:50.092787 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abc9b19-0018-46d1-a119-0ffb069a1795/mysql-bootstrap/0.log" Feb 26 15:05:50 crc kubenswrapper[4724]: I0226 15:05:50.492361 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abc9b19-0018-46d1-a119-0ffb069a1795/mysql-bootstrap/0.log" Feb 26 15:05:50 crc kubenswrapper[4724]: I0226 15:05:50.534689 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abc9b19-0018-46d1-a119-0ffb069a1795/galera/0.log" Feb 26 
15:05:50 crc kubenswrapper[4724]: I0226 15:05:50.812224 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75d9b57894-2862v_48de473d-2e43-44ee-b0d1-db2c8e11fc2b/manager/0.log" Feb 26 15:05:50 crc kubenswrapper[4724]: I0226 15:05:50.831228 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f73e6f92-20c0-4d6c-98e2-3ae4d2dfcfed/openstackclient/0.log" Feb 26 15:05:51 crc kubenswrapper[4724]: I0226 15:05:51.209265 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wm86x_9784324f-b3cf-403e-9e3f-c5298a5257eb/openstack-network-exporter/0.log" Feb 26 15:05:51 crc kubenswrapper[4724]: I0226 15:05:51.562961 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovsdb-server-init/0.log" Feb 26 15:05:51 crc kubenswrapper[4724]: I0226 15:05:51.973236 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovsdb-server-init/0.log" Feb 26 15:05:52 crc kubenswrapper[4724]: I0226 15:05:52.023067 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovs-vswitchd/0.log" Feb 26 15:05:52 crc kubenswrapper[4724]: I0226 15:05:52.251535 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wsr8k_5948e8de-f31a-4efb-80dc-e8dfb083ab79/ovsdb-server/0.log" Feb 26 15:05:52 crc kubenswrapper[4724]: I0226 15:05:52.277400 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-sbkcr_0cfee1c3-df60-4944-a16e-e01dd310f2c4/manager/0.log" Feb 26 15:05:52 crc kubenswrapper[4724]: I0226 15:05:52.545881 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x9682_5b8939ea-2d97-461c-ad75-cba4379157f7/ovn-controller/0.log" Feb 26 15:05:52 crc kubenswrapper[4724]: I0226 15:05:52.936234 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qw42n_33c4673e-f3b9-4bbf-a97d-39412344f6c8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:52 crc kubenswrapper[4724]: I0226 15:05:52.938282 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2496c701-9abc-4d28-8f5d-9cde4cefbabb/nova-api-api/0.log" Feb 26 15:05:53 crc kubenswrapper[4724]: I0226 15:05:53.243486 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_619c3911-f86d-468d-b689-e939b16388e2/ovn-northd/0.log" Feb 26 15:05:53 crc kubenswrapper[4724]: I0226 15:05:53.568988 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_619c3911-f86d-468d-b689-e939b16388e2/openstack-network-exporter/0.log" Feb 26 15:05:53 crc kubenswrapper[4724]: I0226 15:05:53.617386 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_028cb20f-b715-40db-94c1-38bfb934ef53/openstack-network-exporter/0.log" Feb 26 15:05:53 crc kubenswrapper[4724]: I0226 15:05:53.917498 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_028cb20f-b715-40db-94c1-38bfb934ef53/ovsdbserver-nb/0.log" Feb 26 15:05:53 crc kubenswrapper[4724]: I0226 15:05:53.991342 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_6f3d9665-0fdf-4b18-a4cb-1e84f24327ca/openstack-network-exporter/0.log" Feb 26 15:05:54 crc kubenswrapper[4724]: I0226 15:05:54.067299 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-q9sb4_f0ccafa2-8b59-49e6-b881-ffaee0c98646/manager/0.log" Feb 26 15:05:54 crc kubenswrapper[4724]: I0226 15:05:54.315171 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6f3d9665-0fdf-4b18-a4cb-1e84f24327ca/ovsdbserver-sb/0.log" Feb 26 15:05:55 crc kubenswrapper[4724]: I0226 15:05:55.056135 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df/setup-container/0.log" Feb 26 15:05:55 crc kubenswrapper[4724]: I0226 15:05:55.218165 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869f945844-vjsk6_8896d359-383e-4f56-a18d-2d8a913d05a4/placement-api/0.log" Feb 26 15:05:55 crc kubenswrapper[4724]: I0226 15:05:55.543644 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df/setup-container/0.log" Feb 26 15:05:55 crc kubenswrapper[4724]: I0226 15:05:55.579402 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869f945844-vjsk6_8896d359-383e-4f56-a18d-2d8a913d05a4/placement-log/0.log" Feb 26 15:05:55 crc kubenswrapper[4724]: I0226 15:05:55.704081 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da62ca3a-60df-4af3-8b0e-9dd3e8ffd0df/rabbitmq/0.log" Feb 26 15:05:56 crc kubenswrapper[4724]: I0226 15:05:56.095257 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bad75855-a326-41f0-8b17-c83e5be398b9/setup-container/0.log" Feb 26 15:05:56 crc kubenswrapper[4724]: I0226 15:05:56.438535 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bad75855-a326-41f0-8b17-c83e5be398b9/setup-container/0.log" Feb 26 15:05:56 crc kubenswrapper[4724]: I0226 15:05:56.454741 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bad75855-a326-41f0-8b17-c83e5be398b9/rabbitmq/0.log" Feb 26 15:05:56 crc kubenswrapper[4724]: I0226 15:05:56.835037 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-k7xpn_fa584460-b6d4-4fe8-b351-f55f6c5a969a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:56 crc kubenswrapper[4724]: I0226 15:05:56.984779 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7c86p_de96567c-d135-4e9a-b847-ce90658d94be/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:57 crc kubenswrapper[4724]: I0226 15:05:57.394704 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xhpt2_49850149-79d3-4700-801a-c2630caba9c9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:57 crc kubenswrapper[4724]: I0226 15:05:57.457661 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a3ba1adb-959d-470b-a25d-5967665793f3/nova-metadata-metadata/0.log" Feb 26 15:05:57 crc kubenswrapper[4724]: I0226 15:05:57.690883 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rds5w_cad1abca-ca70-4988-804c-ca6d35ba05d7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:05:57 crc kubenswrapper[4724]: I0226 15:05:57.821154 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gwnp7_2206a227-78b8-4ca1-a425-fb061de91843/ssh-known-hosts-edpm-deployment/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.123874 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-746558bfbf-gbdpm_acbb8b99-0b04-48c7-904e-a5c5304813a3/proxy-server/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.265197 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7kkhs_e7412680-68df-4ebb-9961-8a89d8f83176/swift-ring-rebalance/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.529378 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.560346 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-reaper/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.586944 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-auditor/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.589124 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.630990 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fxkc4"] Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.646086 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-746558bfbf-gbdpm_acbb8b99-0b04-48c7-904e-a5c5304813a3/proxy-httpd/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.876438 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-server/0.log" Feb 26 15:05:58 crc kubenswrapper[4724]: I0226 15:05:58.898111 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-auditor/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.052128 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/account-replicator/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.171862 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-replicator/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.295709 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-updater/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.331045 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/container-server/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.560010 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-expirer/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.585811 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-auditor/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.752978 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-server/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.769773 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-replicator/0.log" Feb 26 15:05:59 crc kubenswrapper[4724]: I0226 15:05:59.928100 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/rsync/0.log" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.061360 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fxkc4" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" containerID="cri-o://f13aa57f9fa43a7edbd68e4ad0386fce99906e8aaee36733924242df2851546f" gracePeriod=2 Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.235742 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/object-updater/0.log" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.271514 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535306-hsqnv"] Feb 26 15:06:00 crc kubenswrapper[4724]: E0226 15:06:00.284271 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.284300 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" Feb 26 15:06:00 crc kubenswrapper[4724]: E0226 15:06:00.284343 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="extract-utilities" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.284351 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="extract-utilities" Feb 26 15:06:00 crc kubenswrapper[4724]: E0226 15:06:00.284362 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="extract-content" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.284367 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="extract-content" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.286889 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a7081e-67c3-4dbe-a338-b66db8607aad" containerName="registry-server" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.295280 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.324503 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.324517 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.324773 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.356088 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d5750fa4-34c3-4c23-b0cc-af9726d3034c/swift-recon-cron/0.log" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.370131 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-hsqnv"] Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.467293 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csz8f\" (UniqueName: \"kubernetes.io/projected/226418ca-21f5-40bb-9864-9f7f1cd2b562-kube-api-access-csz8f\") pod \"auto-csr-approver-29535306-hsqnv\" (UID: \"226418ca-21f5-40bb-9864-9f7f1cd2b562\") " pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.568674 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csz8f\" (UniqueName: \"kubernetes.io/projected/226418ca-21f5-40bb-9864-9f7f1cd2b562-kube-api-access-csz8f\") pod \"auto-csr-approver-29535306-hsqnv\" (UID: \"226418ca-21f5-40bb-9864-9f7f1cd2b562\") " pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.652824 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csz8f\" (UniqueName: \"kubernetes.io/projected/226418ca-21f5-40bb-9864-9f7f1cd2b562-kube-api-access-csz8f\") pod \"auto-csr-approver-29535306-hsqnv\" (UID: \"226418ca-21f5-40bb-9864-9f7f1cd2b562\") " pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.780695 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-stfrc_b9209966-a73c-4858-8faf-9053e5447993/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.917538 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_b9b5bd47-dc7c-492d-8c33-cd7d528555f6/tempest-tests-tempest-tests-runner/0.log" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.942652 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:00 crc kubenswrapper[4724]: I0226 15:06:00.960076 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_14b6ff63-4a92-49d9-9d37-0f2092545b77/tempest-tests-tempest-tests-runner/0.log" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.095407 4724 generic.go:334] "Generic (PLEG): container finished" podID="37fc30da-8fc7-4653-a975-bb8411785579" containerID="f13aa57f9fa43a7edbd68e4ad0386fce99906e8aaee36733924242df2851546f" exitCode=0 Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.095454 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerDied","Data":"f13aa57f9fa43a7edbd68e4ad0386fce99906e8aaee36733924242df2851546f"} Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.288935 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78368336-7209-421d-b638-e47679769c6d/test-operator-logs-container/0.log" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.585098 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2v6hd_e4b3aebd-40f4-47b8-836b-dd94ef4010af/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.593658 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.705980 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5v7\" (UniqueName: \"kubernetes.io/projected/37fc30da-8fc7-4653-a975-bb8411785579-kube-api-access-bt5v7\") pod \"37fc30da-8fc7-4653-a975-bb8411785579\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.706260 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-catalog-content\") pod \"37fc30da-8fc7-4653-a975-bb8411785579\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.706337 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-utilities\") pod \"37fc30da-8fc7-4653-a975-bb8411785579\" (UID: \"37fc30da-8fc7-4653-a975-bb8411785579\") " Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.715716 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-utilities" (OuterVolumeSpecName: "utilities") pod "37fc30da-8fc7-4653-a975-bb8411785579" (UID: "37fc30da-8fc7-4653-a975-bb8411785579"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.755735 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fc30da-8fc7-4653-a975-bb8411785579-kube-api-access-bt5v7" (OuterVolumeSpecName: "kube-api-access-bt5v7") pod "37fc30da-8fc7-4653-a975-bb8411785579" (UID: "37fc30da-8fc7-4653-a975-bb8411785579"). 
InnerVolumeSpecName "kube-api-access-bt5v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.758040 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-hsqnv"] Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.813460 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5v7\" (UniqueName: \"kubernetes.io/projected/37fc30da-8fc7-4653-a975-bb8411785579-kube-api-access-bt5v7\") on node \"crc\" DevicePath \"\"" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.813495 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.904507 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37fc30da-8fc7-4653-a975-bb8411785579" (UID: "37fc30da-8fc7-4653-a975-bb8411785579"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:06:01 crc kubenswrapper[4724]: I0226 15:06:01.918375 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fc30da-8fc7-4653-a975-bb8411785579-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.114001 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" event={"ID":"226418ca-21f5-40bb-9864-9f7f1cd2b562","Type":"ContainerStarted","Data":"077b850805e3971305110cd09fd8bbe34017e7b160d62414076a0311156816be"} Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.119332 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxkc4" event={"ID":"37fc30da-8fc7-4653-a975-bb8411785579","Type":"ContainerDied","Data":"c8680466bbeff2ca34d657e762112aaf3924dd3071a557ece5542cebeda914fd"} Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.119387 4724 scope.go:117] "RemoveContainer" containerID="f13aa57f9fa43a7edbd68e4ad0386fce99906e8aaee36733924242df2851546f" Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.119544 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxkc4" Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.202323 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fxkc4"] Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.214798 4724 scope.go:117] "RemoveContainer" containerID="172ba54a4adb2d7f911ebe97a9af4b2069f6750f6da14ca197375d728607adde" Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.223024 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fxkc4"] Feb 26 15:06:02 crc kubenswrapper[4724]: I0226 15:06:02.261061 4724 scope.go:117] "RemoveContainer" containerID="4871c091d5bc11fe66df9c03934ba0e589a9d839387c502fd6c42dfd9a364692" Feb 26 15:06:04 crc kubenswrapper[4724]: I0226 15:06:04.040894 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fc30da-8fc7-4653-a975-bb8411785579" path="/var/lib/kubelet/pods/37fc30da-8fc7-4653-a975-bb8411785579/volumes" Feb 26 15:06:04 crc kubenswrapper[4724]: I0226 15:06:04.145240 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" event={"ID":"226418ca-21f5-40bb-9864-9f7f1cd2b562","Type":"ContainerStarted","Data":"0e4114b2aa49fbf316363875311209c11c305440f5b255e6d69931206eeb73f5"} Feb 26 15:06:04 crc kubenswrapper[4724]: I0226 15:06:04.159810 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" podStartSLOduration=3.110105365 podStartE2EDuration="4.159719141s" podCreationTimestamp="2026-02-26 15:06:00 +0000 UTC" firstStartedPulling="2026-02-26 15:06:01.820094298 +0000 UTC m=+14428.475833413" lastFinishedPulling="2026-02-26 15:06:02.869708074 +0000 UTC m=+14429.525447189" observedRunningTime="2026-02-26 15:06:04.157799252 +0000 UTC m=+14430.813538357" watchObservedRunningTime="2026-02-26 15:06:04.159719141 +0000 UTC m=+14430.815458256" Feb 26 15:06:06 crc kubenswrapper[4724]: I0226 15:06:06.193332 4724 generic.go:334] "Generic (PLEG): container finished" podID="226418ca-21f5-40bb-9864-9f7f1cd2b562" containerID="0e4114b2aa49fbf316363875311209c11c305440f5b255e6d69931206eeb73f5" exitCode=0 Feb 26 15:06:06 crc kubenswrapper[4724]: I0226 15:06:06.193411 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" event={"ID":"226418ca-21f5-40bb-9864-9f7f1cd2b562","Type":"ContainerDied","Data":"0e4114b2aa49fbf316363875311209c11c305440f5b255e6d69931206eeb73f5"} Feb 26 15:06:07 crc kubenswrapper[4724]: I0226 15:06:07.607989 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:07 crc kubenswrapper[4724]: I0226 15:06:07.746114 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csz8f\" (UniqueName: \"kubernetes.io/projected/226418ca-21f5-40bb-9864-9f7f1cd2b562-kube-api-access-csz8f\") pod \"226418ca-21f5-40bb-9864-9f7f1cd2b562\" (UID: \"226418ca-21f5-40bb-9864-9f7f1cd2b562\") " Feb 26 15:06:07 crc kubenswrapper[4724]: I0226 15:06:07.769763 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226418ca-21f5-40bb-9864-9f7f1cd2b562-kube-api-access-csz8f" (OuterVolumeSpecName: "kube-api-access-csz8f") pod "226418ca-21f5-40bb-9864-9f7f1cd2b562" (UID: "226418ca-21f5-40bb-9864-9f7f1cd2b562"). InnerVolumeSpecName "kube-api-access-csz8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:06:07 crc kubenswrapper[4724]: I0226 15:06:07.848641 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csz8f\" (UniqueName: \"kubernetes.io/projected/226418ca-21f5-40bb-9864-9f7f1cd2b562-kube-api-access-csz8f\") on node \"crc\" DevicePath \"\"" Feb 26 15:06:08 crc kubenswrapper[4724]: I0226 15:06:08.010844 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b70be877-253f-4859-ae54-bd241f38cb93/memcached/0.log" Feb 26 15:06:08 crc kubenswrapper[4724]: I0226 15:06:08.231540 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" event={"ID":"226418ca-21f5-40bb-9864-9f7f1cd2b562","Type":"ContainerDied","Data":"077b850805e3971305110cd09fd8bbe34017e7b160d62414076a0311156816be"} Feb 26 15:06:08 crc kubenswrapper[4724]: I0226 15:06:08.231582 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077b850805e3971305110cd09fd8bbe34017e7b160d62414076a0311156816be" Feb 26 15:06:08 crc kubenswrapper[4724]: I0226 15:06:08.231588 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-hsqnv" Feb 26 15:06:08 crc kubenswrapper[4724]: I0226 15:06:08.291806 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-4spbs"] Feb 26 15:06:08 crc kubenswrapper[4724]: I0226 15:06:08.302365 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-4spbs"] Feb 26 15:06:09 crc kubenswrapper[4724]: I0226 15:06:09.985239 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89669d5-8f04-41d9-9cf6-a490ed30d9ab" path="/var/lib/kubelet/pods/f89669d5-8f04-41d9-9cf6-a490ed30d9ab/volumes" Feb 26 15:06:23 crc kubenswrapper[4724]: I0226 15:06:23.457955 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xw4vt_e87b7bd7-9d39-48f0-b896-fe5da437416f/control-plane-machine-set-operator/1.log" Feb 26 15:06:23 crc kubenswrapper[4724]: I0226 15:06:23.533739 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xw4vt_e87b7bd7-9d39-48f0-b896-fe5da437416f/control-plane-machine-set-operator/0.log" Feb 26 15:06:23 crc kubenswrapper[4724]: I0226 15:06:23.754554 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4f5jn_6a9effc4-1c10-46ae-9762-1f3308aa9bc9/kube-rbac-proxy/0.log" Feb 26 15:06:23 crc kubenswrapper[4724]: I0226 15:06:23.820093 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4f5jn_6a9effc4-1c10-46ae-9762-1f3308aa9bc9/machine-api-operator/0.log" Feb 26 15:06:35 crc kubenswrapper[4724]: I0226 15:06:35.838102 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/util/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.027929 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/util/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.032768 4724 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/pull/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.131072 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/pull/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.369344 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/pull/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.375688 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/extract/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.393254 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93b7970c463ccbf365687347dab4fbce06e8c440f4e56586fd17895e8bvxcl7_3d0adab1-1760-4649-9b4a-63dbe6bf84a2/util/0.log" Feb 26 15:06:36 crc kubenswrapper[4724]: I0226 15:06:36.985052 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-p58p8_9ae55185-83a5-47ea-b54f-01b31471f512/manager/0.log" Feb 26 15:06:37 crc kubenswrapper[4724]: I0226 15:06:37.459412 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-6qq4t_1c97807f-b47f-4762-80d8-a296d8108e19/manager/0.log" Feb 26 15:06:37 crc kubenswrapper[4724]: I0226 15:06:37.687389 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-wxw2f_bc959a10-5f94-4d38-87d2-dda60f8ae078/manager/0.log" Feb 26 15:06:37 crc kubenswrapper[4724]: I0226 15:06:37.971739 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-6zmmb_731a1439-aa83-4119-ae37-23f526e6e73a/manager/0.log" Feb 26 15:06:38 crc kubenswrapper[4724]: I0226 15:06:38.812709 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-8n9qj_2714a834-e9ca-40b1-a73c-2b890783f29e/manager/0.log" Feb 26 15:06:38 crc kubenswrapper[4724]: I0226 15:06:38.920969 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-pxtxm_2178a458-4e8c-4d30-bbdb-8a0ef864fd80/manager/0.log" Feb 26 15:06:39 crc kubenswrapper[4724]: I0226 15:06:39.376171 4724 scope.go:117] "RemoveContainer" containerID="82093ce08f4487947505b3ff08128b3a1b537a6002888de364404e9767c6f960" Feb 26 15:06:39 crc kubenswrapper[4724]: I0226 15:06:39.413655 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-rqsqh_d2b50788-4e25-4589-84b8-00851a2a18b7/manager/0.log" Feb 26 15:06:39 crc kubenswrapper[4724]: I0226 15:06:39.958049 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-bfrsl_5495b5cf-e7d2-4fbf-98a4-ca9fc0276a73/manager/0.log" Feb 26 15:06:40 crc kubenswrapper[4724]: I0226 15:06:40.726263 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-vp8fp_da86929c-f438-4994-80be-1a7aa3b7b76e/manager/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.068639 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h8dsz_949d93dc-988e-49b8-9fde-63c227730e7a/cert-manager-controller/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.330481 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-wjjjc_193a7bdd-a3a7-493d-8c99-a04d591e3a19/manager/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.424754 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-n2mfn_edc23874-b08b-4197-8662-4daac14a41bb/cert-manager-cainjector/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.491513 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-sbkcr_0cfee1c3-df60-4944-a16e-e01dd310f2c4/manager/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.742713 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-k75jd_9bcf19f6-1ed9-4315-a263-1bd5c8da7774/manager/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.782707 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4h46l_bd772542-67d2-4628-9b09-34bc55eec26d/cert-manager-webhook/0.log" Feb 26 15:06:41 crc kubenswrapper[4724]: I0226 15:06:41.993776 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-rjxxw_5b790d8b-575d-462e-a9b1-512d91261517/manager/0.log" Feb 26 15:06:42 crc kubenswrapper[4724]: I0226 15:06:42.190357 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cvqj4l_39700bc5-43f0-49b6-b510-523322e34eb5/manager/0.log" Feb 26 15:06:42 crc kubenswrapper[4724]: I0226 15:06:42.306767 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-q9sb4_f0ccafa2-8b59-49e6-b881-ffaee0c98646/manager/0.log" Feb 26 15:06:42 crc kubenswrapper[4724]: I0226 15:06:42.454825 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-76b6d74844-bpg9d_20b666d6-e71f-4bdb-b71d-44ac3a0c74c6/operator/0.log" Feb 26 15:06:42 crc kubenswrapper[4724]: I0226 15:06:42.704479 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5f2tw_ee48a99c-cb5f-4564-9631-daeae942461e/registry-server/0.log" Feb 26 15:06:42 crc kubenswrapper[4724]: I0226 15:06:42.999272 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-pwg6d_cd71c91d-33bb-4eae-9f27-84f39ef7653d/manager/0.log" Feb 26 15:06:43 crc kubenswrapper[4724]: I0226 15:06:43.305463 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-6j4fp_d49437c7-7f60-4304-b216-dcf93e31be87/manager/0.log" Feb 26 15:06:43 crc kubenswrapper[4724]: I0226 15:06:43.593530 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2r984_dc7781b3-4d7b-4855-8e76-bb3ad2028a9c/operator/0.log" Feb 26 15:06:43 crc kubenswrapper[4724]: I0226 15:06:43.695262 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75d9b57894-2862v_48de473d-2e43-44ee-b0d1-db2c8e11fc2b/manager/0.log" Feb 26 15:06:43 crc kubenswrapper[4724]: I0226 15:06:43.828567 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-ntcpr_d90588ea-6237-4fd0-a321-9c6db1e07525/manager/0.log" Feb 26 15:06:43 crc kubenswrapper[4724]: I0226 15:06:43.997551 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-qhw9r_b14d5ade-65f3-4402-bacd-5acc8ef39ce5/manager/0.log" Feb 26 15:06:44 crc kubenswrapper[4724]: I0226 15:06:44.054983 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-f8k6f_897f5a3f-a04e-4725-8a9a-0ce91c8bb372/manager/0.log" Feb 26 15:06:44 crc kubenswrapper[4724]: I0226 15:06:44.151563 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-pgtk4_d04a7b9b-e3a4-4876-bb57-10d86295d9c0/manager/0.log" Feb 26 15:06:46 crc kubenswrapper[4724]: I0226 15:06:46.906267 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:06:46 crc kubenswrapper[4724]: I0226 15:06:46.906532 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:06:58 crc kubenswrapper[4724]: I0226 15:06:58.835652 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-g97p9_bffbb3a0-67ab-485c-a82c-1acf6925532e/nmstate-console-plugin/0.log" Feb 26 15:06:59 crc kubenswrapper[4724]: I0226 15:06:59.010622 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7cg6v_9897fa30-971d-4825-9dea-05da142cc1d1/nmstate-handler/0.log" Feb 26 15:06:59 crc kubenswrapper[4724]: I0226 15:06:59.074338 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-4k9lv_c61316af-28e2-4430-a8d4-058db8a35946/kube-rbac-proxy/0.log" Feb 26 15:06:59 crc kubenswrapper[4724]: I0226 15:06:59.178085 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-4k9lv_c61316af-28e2-4430-a8d4-058db8a35946/nmstate-metrics/0.log" Feb 26 15:06:59 crc kubenswrapper[4724]: I0226 15:06:59.419827 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-2qxwx_25512be6-334e-4f85-9466-8505e3f3eb51/nmstate-operator/0.log" Feb 26 15:06:59 crc kubenswrapper[4724]: I0226 15:06:59.442592 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-r6xm9_2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb/nmstate-webhook/0.log" Feb 26 15:07:07 crc kubenswrapper[4724]: I0226 15:07:07.213118 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xw4vt_e87b7bd7-9d39-48f0-b896-fe5da437416f/control-plane-machine-set-operator/1.log" Feb 26 15:07:07 crc kubenswrapper[4724]: I0226 15:07:07.261767 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xw4vt_e87b7bd7-9d39-48f0-b896-fe5da437416f/control-plane-machine-set-operator/0.log" Feb 26 15:07:07 crc kubenswrapper[4724]: I0226 15:07:07.491245 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4f5jn_6a9effc4-1c10-46ae-9762-1f3308aa9bc9/machine-api-operator/0.log" Feb 26 15:07:07 crc kubenswrapper[4724]: I0226 15:07:07.495220 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4f5jn_6a9effc4-1c10-46ae-9762-1f3308aa9bc9/kube-rbac-proxy/0.log" Feb 26 15:07:16 crc kubenswrapper[4724]: I0226 15:07:16.906172 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:07:16 crc kubenswrapper[4724]: I0226 15:07:16.906764 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:07:23 crc kubenswrapper[4724]: I0226 15:07:23.964709 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h8dsz_949d93dc-988e-49b8-9fde-63c227730e7a/cert-manager-controller/0.log" Feb 26 15:07:24 crc kubenswrapper[4724]: I0226 15:07:24.239717 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-n2mfn_edc23874-b08b-4197-8662-4daac14a41bb/cert-manager-cainjector/0.log" Feb 26 15:07:24 crc kubenswrapper[4724]: I0226 15:07:24.361329 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4h46l_bd772542-67d2-4628-9b09-34bc55eec26d/cert-manager-webhook/0.log" Feb 26 15:07:33 crc kubenswrapper[4724]: I0226 15:07:33.637549 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-cg9xd_665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4/kube-rbac-proxy/0.log" Feb 26 15:07:33 crc kubenswrapper[4724]: I0226 15:07:33.731357 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-cg9xd_665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4/controller/0.log" Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.060095 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log" Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.259614 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log" Feb 26 
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.327169 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.347611 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.407167 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.635781 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.652358 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.681131 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.707211 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.906358 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.964839 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log"
Feb 26 15:07:34 crc kubenswrapper[4724]: I0226 15:07:34.979332 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log"
Feb 26 15:07:35 crc kubenswrapper[4724]: I0226 15:07:35.017789 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/controller/0.log"
Feb 26 15:07:35 crc kubenswrapper[4724]: I0226 15:07:35.233058 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/frr-metrics/0.log"
Feb 26 15:07:35 crc kubenswrapper[4724]: I0226 15:07:35.313954 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/kube-rbac-proxy/0.log"
Feb 26 15:07:35 crc kubenswrapper[4724]: I0226 15:07:35.404407 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/kube-rbac-proxy-frr/0.log"
Feb 26 15:07:36 crc kubenswrapper[4724]: I0226 15:07:36.002959 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/reloader/0.log"
Feb 26 15:07:36 crc kubenswrapper[4724]: I0226 15:07:36.024220 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-xv452_933b3336-9cea-4b27-92e3-3fcf69076040/frr-k8s-webhook-server/0.log"
Feb 26 15:07:36 crc kubenswrapper[4724]: I0226 15:07:36.299836 4724 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64754968d5-4ktxs_c749ff83-c2b1-49fc-b99a-1a8f7bda31fa/manager/0.log" Feb 26 15:07:36 crc kubenswrapper[4724]: I0226 15:07:36.522627 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c46bf8994-k9qf2_c07d2449-8a13-4ae2-832b-30904057f00c/webhook-server/0.log" Feb 26 15:07:36 crc kubenswrapper[4724]: I0226 15:07:36.721823 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5vsqn_9b29feac-9647-448f-8c83-e33894da59dd/kube-rbac-proxy/0.log" Feb 26 15:07:37 crc kubenswrapper[4724]: I0226 15:07:37.451946 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5vsqn_9b29feac-9647-448f-8c83-e33894da59dd/speaker/0.log" Feb 26 15:07:38 crc kubenswrapper[4724]: I0226 15:07:38.409896 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/frr/0.log" Feb 26 15:07:42 crc kubenswrapper[4724]: I0226 15:07:42.235186 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-g97p9_bffbb3a0-67ab-485c-a82c-1acf6925532e/nmstate-console-plugin/0.log" Feb 26 15:07:42 crc kubenswrapper[4724]: I0226 15:07:42.395348 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7cg6v_9897fa30-971d-4825-9dea-05da142cc1d1/nmstate-handler/0.log" Feb 26 15:07:42 crc kubenswrapper[4724]: I0226 15:07:42.543790 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-4k9lv_c61316af-28e2-4430-a8d4-058db8a35946/kube-rbac-proxy/0.log" Feb 26 15:07:42 crc kubenswrapper[4724]: I0226 15:07:42.616925 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-4k9lv_c61316af-28e2-4430-a8d4-058db8a35946/nmstate-metrics/0.log" Feb 26 15:07:42 crc kubenswrapper[4724]: I0226 15:07:42.752646 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-2qxwx_25512be6-334e-4f85-9466-8505e3f3eb51/nmstate-operator/0.log" Feb 26 15:07:42 crc kubenswrapper[4724]: I0226 15:07:42.980122 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-r6xm9_2d9f7960-c1f4-40e2-bda5-8b9bf8c72bfb/nmstate-webhook/0.log" Feb 26 15:07:46 crc kubenswrapper[4724]: I0226 15:07:46.906384 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:07:46 crc kubenswrapper[4724]: I0226 15:07:46.906768 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:07:46 crc kubenswrapper[4724]: I0226 15:07:46.906829 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 15:07:46 crc kubenswrapper[4724]: I0226 15:07:46.907708 4724 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1413d2ccbd104e8150cde8d90f88242e089bd6ca48f9c203576affea50184696"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:07:46 crc kubenswrapper[4724]: I0226 15:07:46.907777 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://1413d2ccbd104e8150cde8d90f88242e089bd6ca48f9c203576affea50184696" gracePeriod=600 Feb 26 15:07:47 crc kubenswrapper[4724]: I0226 15:07:47.266068 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="1413d2ccbd104e8150cde8d90f88242e089bd6ca48f9c203576affea50184696" exitCode=0 Feb 26 15:07:47 crc kubenswrapper[4724]: I0226 15:07:47.266118 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"1413d2ccbd104e8150cde8d90f88242e089bd6ca48f9c203576affea50184696"} Feb 26 15:07:47 crc kubenswrapper[4724]: I0226 15:07:47.266150 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerStarted","Data":"b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a"} Feb 26 15:07:47 crc kubenswrapper[4724]: I0226 15:07:47.266168 4724 scope.go:117] "RemoveContainer" containerID="5aaba2f34f042cf6ec9e248977e4747c9f24fd8e0b1dd6a8ccadd9f8133915e2" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.115921 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/util/0.log" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.492269 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/util/0.log" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.564921 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/pull/0.log" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.581565 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/pull/0.log" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.803431 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/util/0.log" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.854518 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/pull/0.log" Feb 26 15:07:55 crc kubenswrapper[4724]: I0226 15:07:55.887918 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/extract/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.088017 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-utilities/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.294519 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-utilities/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.297885 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-content/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.308679 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-content/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.600252 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-utilities/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.676502 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-content/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.935375 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/registry-server/0.log" Feb 26 15:07:56 crc kubenswrapper[4724]: I0226 15:07:56.966650 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-utilities/0.log" Feb 26 15:07:57 crc kubenswrapper[4724]: I0226 15:07:57.247578 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-utilities/0.log" Feb 26 15:07:57 crc kubenswrapper[4724]: I0226 15:07:57.267115 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-content/0.log" Feb 26 15:07:57 crc kubenswrapper[4724]: I0226 15:07:57.270324 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-content/0.log" Feb 26 15:07:57 crc kubenswrapper[4724]: I0226 15:07:57.593712 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-utilities/0.log" Feb 26 15:07:57 crc kubenswrapper[4724]: I0226 15:07:57.612130 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-content/0.log" Feb 26 15:07:57 crc kubenswrapper[4724]: I0226 15:07:57.858748 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/util/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.332255 4724 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/util/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.335718 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/pull/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.337868 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/registry-server/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.497336 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/pull/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.663461 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/pull/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.674033 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/util/0.log" Feb 26 15:07:58 crc kubenswrapper[4724]: I0226 15:07:58.743611 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/extract/0.log" Feb 26 15:07:59 crc kubenswrapper[4724]: I0226 15:07:59.272241 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rtjt6_2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3/marketplace-operator/0.log" Feb 26 15:07:59 crc kubenswrapper[4724]: I0226 15:07:59.354081 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-utilities/0.log" Feb 26 15:07:59 crc kubenswrapper[4724]: I0226 15:07:59.566358 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-utilities/0.log" Feb 26 15:07:59 crc kubenswrapper[4724]: I0226 15:07:59.589195 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-content/0.log" Feb 26 15:07:59 crc kubenswrapper[4724]: I0226 15:07:59.681524 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-content/0.log" Feb 26 15:07:59 crc kubenswrapper[4724]: I0226 15:07:59.896989 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-content/0.log" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.033125 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-utilities/0.log" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.173501 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29535308-mjbmn"] Feb 26 15:08:00 crc kubenswrapper[4724]: E0226 15:08:00.209569 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="extract-content" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.209597 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="extract-content" Feb 26 15:08:00 crc kubenswrapper[4724]: E0226 15:08:00.209622 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="extract-utilities" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.209630 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="extract-utilities" Feb 26 15:08:00 crc kubenswrapper[4724]: E0226 15:08:00.209641 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226418ca-21f5-40bb-9864-9f7f1cd2b562" containerName="oc" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.209647 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="226418ca-21f5-40bb-9864-9f7f1cd2b562" containerName="oc" Feb 26 15:08:00 crc kubenswrapper[4724]: E0226 15:08:00.209663 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.209669 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.210029 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="226418ca-21f5-40bb-9864-9f7f1cd2b562" containerName="oc" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.210047 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fc30da-8fc7-4653-a975-bb8411785579" containerName="registry-server" Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.219093 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-mjbmn"] Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.221195 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-mjbmn"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.231018 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.231081 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.232213 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.305079 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-utilities/0.log"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.313836 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsclt\" (UniqueName: \"kubernetes.io/projected/863b6279-0e0b-4216-be32-a72df2eb498e-kube-api-access-vsclt\") pod \"auto-csr-approver-29535308-mjbmn\" (UID: \"863b6279-0e0b-4216-be32-a72df2eb498e\") " pod="openshift-infra/auto-csr-approver-29535308-mjbmn"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.417014 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsclt\" (UniqueName: \"kubernetes.io/projected/863b6279-0e0b-4216-be32-a72df2eb498e-kube-api-access-vsclt\") pod \"auto-csr-approver-29535308-mjbmn\" (UID: \"863b6279-0e0b-4216-be32-a72df2eb498e\") " pod="openshift-infra/auto-csr-approver-29535308-mjbmn"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.427889 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/registry-server/0.log"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.455286 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsclt\" (UniqueName: \"kubernetes.io/projected/863b6279-0e0b-4216-be32-a72df2eb498e-kube-api-access-vsclt\") pod \"auto-csr-approver-29535308-mjbmn\" (UID: \"863b6279-0e0b-4216-be32-a72df2eb498e\") " pod="openshift-infra/auto-csr-approver-29535308-mjbmn"
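The kube-api-access-vsclt volume above is the pod's projected service-account volume: the kubelet attaches and mounts it before the first container starts, and the UnmountVolume/TearDown entries a little further down remove it once the pod finishes. Inside the pod, the projection appears as three files under the conventional in-cluster directory. A short sketch of consuming it (illustrative; paths are the standard in-cluster location):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// saDir is where a kube-api-access-* projected volume is mounted in a pod:
// the service-account token, the cluster CA bundle, and the namespace.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Println("not running in a pod, or volume not mounted:", err)
			return
		}
		// Avoid printing the token itself; its length is enough here.
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}
```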
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.554104 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-mjbmn"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.618894 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-utilities/0.log"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.652780 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-content/0.log"
Feb 26 15:08:00 crc kubenswrapper[4724]: I0226 15:08:00.737773 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-content/0.log"
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.061209 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-mjbmn"]
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.091435 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.111110 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/1.log"
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.145114 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-content/0.log"
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.194617 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-utilities/0.log"
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.202496 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/2.log"
Feb 26 15:08:01 crc kubenswrapper[4724]: I0226 15:08:01.387679 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" event={"ID":"863b6279-0e0b-4216-be32-a72df2eb498e","Type":"ContainerStarted","Data":"0f95bd701286908a74389847f1fe696764e7257393921c69e5e70aabf4270f6f"}
Feb 26 15:08:03 crc kubenswrapper[4724]: I0226 15:08:03.403831 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" event={"ID":"863b6279-0e0b-4216-be32-a72df2eb498e","Type":"ContainerStarted","Data":"160efd908daf8a7f28d5fef3310f9df50d819f2afe9fffccbbd746fca99e2f64"}
Feb 26 15:08:03 crc kubenswrapper[4724]: I0226 15:08:03.431004 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" podStartSLOduration=2.474262375 podStartE2EDuration="3.430964424s" podCreationTimestamp="2026-02-26 15:08:00 +0000 UTC" firstStartedPulling="2026-02-26 15:08:01.08932937 +0000 UTC m=+14547.745068485" lastFinishedPulling="2026-02-26 15:08:02.046031419 +0000 UTC m=+14548.701770534" observedRunningTime="2026-02-26 15:08:03.424158102 +0000 UTC m=+14550.079897217" watchObservedRunningTime="2026-02-26 15:08:03.430964424 +0000 UTC m=+14550.086703539"
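The pod_startup_latency_tracker entry above encodes a simple relationship: podStartSLOduration (2.474262375) is podStartE2EDuration (3.430964424s, watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 0.957s). A small check of that arithmetic using the timestamps from the entry (the layout string is for parsing the log's timestamp format, not the kubelet's internal representation):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the form the log prints them,
// e.g. "2026-02-26 15:08:01.08932937 +0000 UTC".
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-26 15:08:00 +0000 UTC")
	firstPull := mustParse("2026-02-26 15:08:01.08932937 +0000 UTC")
	lastPull := mustParse("2026-02-26 15:08:02.046031419 +0000 UTC")
	running := mustParse("2026-02-26 15:08:03.430964424 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration = 3.430964424s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = 2.474262375s
	fmt.Println(e2e, slo)
}
```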
containerID="160efd908daf8a7f28d5fef3310f9df50d819f2afe9fffccbbd746fca99e2f64" exitCode=0 Feb 26 15:08:04 crc kubenswrapper[4724]: I0226 15:08:04.413273 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" event={"ID":"863b6279-0e0b-4216-be32-a72df2eb498e","Type":"ContainerDied","Data":"160efd908daf8a7f28d5fef3310f9df50d819f2afe9fffccbbd746fca99e2f64"} Feb 26 15:08:05 crc kubenswrapper[4724]: I0226 15:08:05.823653 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" Feb 26 15:08:05 crc kubenswrapper[4724]: I0226 15:08:05.937812 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsclt\" (UniqueName: \"kubernetes.io/projected/863b6279-0e0b-4216-be32-a72df2eb498e-kube-api-access-vsclt\") pod \"863b6279-0e0b-4216-be32-a72df2eb498e\" (UID: \"863b6279-0e0b-4216-be32-a72df2eb498e\") " Feb 26 15:08:05 crc kubenswrapper[4724]: I0226 15:08:05.943556 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863b6279-0e0b-4216-be32-a72df2eb498e-kube-api-access-vsclt" (OuterVolumeSpecName: "kube-api-access-vsclt") pod "863b6279-0e0b-4216-be32-a72df2eb498e" (UID: "863b6279-0e0b-4216-be32-a72df2eb498e"). InnerVolumeSpecName "kube-api-access-vsclt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:08:06 crc kubenswrapper[4724]: I0226 15:08:06.045385 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsclt\" (UniqueName: \"kubernetes.io/projected/863b6279-0e0b-4216-be32-a72df2eb498e-kube-api-access-vsclt\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:06 crc kubenswrapper[4724]: I0226 15:08:06.434681 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" event={"ID":"863b6279-0e0b-4216-be32-a72df2eb498e","Type":"ContainerDied","Data":"0f95bd701286908a74389847f1fe696764e7257393921c69e5e70aabf4270f6f"} Feb 26 15:08:06 crc kubenswrapper[4724]: I0226 15:08:06.434985 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-mjbmn" Feb 26 15:08:06 crc kubenswrapper[4724]: I0226 15:08:06.435708 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f95bd701286908a74389847f1fe696764e7257393921c69e5e70aabf4270f6f" Feb 26 15:08:06 crc kubenswrapper[4724]: I0226 15:08:06.483510 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-k7qtw"] Feb 26 15:08:06 crc kubenswrapper[4724]: I0226 15:08:06.491114 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-k7qtw"] Feb 26 15:08:07 crc kubenswrapper[4724]: I0226 15:08:07.985691 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46881f2d-840f-462a-ad89-af75f272e60c" path="/var/lib/kubelet/pods/46881f2d-840f-462a-ad89-af75f272e60c/volumes" Feb 26 15:08:18 crc kubenswrapper[4724]: I0226 15:08:18.754074 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-cg9xd_665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4/controller/0.log" Feb 26 15:08:18 crc kubenswrapper[4724]: I0226 15:08:18.759347 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-cg9xd_665cc442-6cd7-4069-b0b4-2e2ee8a0b7d4/kube-rbac-proxy/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.078156 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.302397 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.360947 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.376671 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.388946 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.589570 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.633957 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.640974 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.772787 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.905385 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-reloader/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.931449 4724 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-frr-files/0.log" Feb 26 15:08:19 crc kubenswrapper[4724]: I0226 15:08:19.948956 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/cp-metrics/0.log" Feb 26 15:08:20 crc kubenswrapper[4724]: I0226 15:08:20.060692 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/controller/0.log" Feb 26 15:08:20 crc kubenswrapper[4724]: I0226 15:08:20.211321 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/kube-rbac-proxy/0.log" Feb 26 15:08:20 crc kubenswrapper[4724]: I0226 15:08:20.299101 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/frr-metrics/0.log" Feb 26 15:08:20 crc kubenswrapper[4724]: I0226 15:08:20.443037 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/kube-rbac-proxy-frr/0.log" Feb 26 15:08:20 crc kubenswrapper[4724]: I0226 15:08:20.532631 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/reloader/0.log" Feb 26 15:08:20 crc kubenswrapper[4724]: I0226 15:08:20.730912 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-xv452_933b3336-9cea-4b27-92e3-3fcf69076040/frr-k8s-webhook-server/0.log" Feb 26 15:08:21 crc kubenswrapper[4724]: I0226 15:08:21.026069 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64754968d5-4ktxs_c749ff83-c2b1-49fc-b99a-1a8f7bda31fa/manager/0.log" Feb 26 15:08:21 crc kubenswrapper[4724]: I0226 15:08:21.436057 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c46bf8994-k9qf2_c07d2449-8a13-4ae2-832b-30904057f00c/webhook-server/0.log" Feb 26 15:08:21 crc kubenswrapper[4724]: I0226 15:08:21.479887 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5vsqn_9b29feac-9647-448f-8c83-e33894da59dd/kube-rbac-proxy/0.log" Feb 26 15:08:22 crc kubenswrapper[4724]: I0226 15:08:22.623978 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5vsqn_9b29feac-9647-448f-8c83-e33894da59dd/speaker/0.log" Feb 26 15:08:24 crc kubenswrapper[4724]: I0226 15:08:24.049619 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b86hc_d848b417-9306-4564-b059-0dc84bd7ec1a/frr/0.log" Feb 26 15:08:39 crc kubenswrapper[4724]: I0226 15:08:39.620453 4724 scope.go:117] "RemoveContainer" containerID="d496e389c292d1ab6abd8a9a8d439dc22cac1226f7428977b61156a56fdc0ae8" Feb 26 15:08:39 crc kubenswrapper[4724]: I0226 15:08:39.781063 4724 scope.go:117] "RemoveContainer" containerID="d31c1ac58b7c34251ec3cc1d3b1d3b5ab1e2a5f368f11108845abaa1741bcf7f" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.227535 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/util/0.log" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.394381 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/util/0.log" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.527899 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/pull/0.log" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.565979 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/pull/0.log" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.774799 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/util/0.log" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.828503 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/extract/0.log" Feb 26 15:08:45 crc kubenswrapper[4724]: I0226 15:08:45.912718 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82jf29s_648c1a76-a342-4f33-b06e-3a7969b0e1bb/pull/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.071731 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-utilities/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.249521 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-utilities/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.385352 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-content/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.405597 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-content/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.558517 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-content/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.623440 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/extract-utilities/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.954353 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8s9wz_0ce62393-2f46-4fd6-b3f9-dabc3a65d917/registry-server/0.log" Feb 26 15:08:46 crc kubenswrapper[4724]: I0226 15:08:46.975171 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-utilities/0.log" Feb 26 15:08:47 crc kubenswrapper[4724]: I0226 15:08:47.090783 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-content/0.log" Feb 26 15:08:47 crc kubenswrapper[4724]: I0226 15:08:47.121649 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-utilities/0.log" Feb 26 15:08:47 crc kubenswrapper[4724]: I0226 15:08:47.193701 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-content/0.log" Feb 26 15:08:47 crc kubenswrapper[4724]: I0226 15:08:47.394361 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-content/0.log" Feb 26 15:08:47 crc kubenswrapper[4724]: I0226 15:08:47.529166 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/extract-utilities/0.log" Feb 26 15:08:47 crc kubenswrapper[4724]: I0226 15:08:47.658937 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/util/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.130332 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xpspf_95e6bf93-9eb4-4b41-9428-39cf8e781456/registry-server/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.158094 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/pull/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.160893 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/util/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.213612 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/pull/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.338971 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/extract/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.379099 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/util/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.434444 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f47xz2p_55df853e-3e28-4871-8b98-ac9bc1a02cbf/pull/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.612424 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rtjt6_2edf1cee-54e6-4ffa-93ea-d09a2a74d8a3/marketplace-operator/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.642007 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-utilities/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.925417 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-content/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.947758 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-content/0.log" Feb 26 15:08:48 crc kubenswrapper[4724]: I0226 15:08:48.957065 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-utilities/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.198983 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-utilities/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.204253 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/extract-content/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.494350 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-utilities/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.641976 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8h6mc_e8868abd-2431-4e5b-98d6-574ca6449d4b/registry-server/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.752910 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-utilities/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.783392 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-content/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.815556 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-content/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.963818 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-content/0.log" Feb 26 15:08:49 crc kubenswrapper[4724]: I0226 15:08:49.964899 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/extract-utilities/0.log" Feb 26 15:08:50 crc kubenswrapper[4724]: I0226 15:08:50.061156 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/1.log" Feb 26 15:08:50 crc kubenswrapper[4724]: I0226 15:08:50.145669 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xr87w_550ea3fc-915a-433b-9b60-2a6febd5afe4/registry-server/2.log" Feb 26 15:08:58 crc kubenswrapper[4724]: I0226 15:08:58.942831 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-9szvw"] Feb 26 15:08:58 crc kubenswrapper[4724]: E0226 15:08:58.947250 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b6279-0e0b-4216-be32-a72df2eb498e" containerName="oc" Feb 26 15:08:58 crc kubenswrapper[4724]: I0226 15:08:58.947276 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b6279-0e0b-4216-be32-a72df2eb498e" containerName="oc" Feb 26 15:08:58 crc kubenswrapper[4724]: I0226 15:08:58.948033 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="863b6279-0e0b-4216-be32-a72df2eb498e" containerName="oc" Feb 26 15:08:58 crc kubenswrapper[4724]: I0226 15:08:58.951066 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:58 crc kubenswrapper[4724]: I0226 15:08:58.967537 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9szvw"] Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.050437 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5d4m\" (UniqueName: \"kubernetes.io/projected/71d790c3-14f6-496d-9f34-1b947b927697-kube-api-access-x5d4m\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.050748 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-utilities\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.050904 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-catalog-content\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.153473 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d4m\" (UniqueName: \"kubernetes.io/projected/71d790c3-14f6-496d-9f34-1b947b927697-kube-api-access-x5d4m\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.153939 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-utilities\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.154226 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-catalog-content\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.154884 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-utilities\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.155127 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-catalog-content\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.190933 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d4m\" (UniqueName: \"kubernetes.io/projected/71d790c3-14f6-496d-9f34-1b947b927697-kube-api-access-x5d4m\") pod \"community-operators-9szvw\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:08:59 crc kubenswrapper[4724]: I0226 15:08:59.275971 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:00 crc kubenswrapper[4724]: I0226 15:09:00.441126 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9szvw"] Feb 26 15:09:00 crc kubenswrapper[4724]: I0226 15:09:00.942407 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerDied","Data":"d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1"} Feb 26 15:09:00 crc kubenswrapper[4724]: I0226 15:09:00.942571 4724 generic.go:334] "Generic (PLEG): container finished" podID="71d790c3-14f6-496d-9f34-1b947b927697" containerID="d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1" exitCode=0 Feb 26 15:09:00 crc kubenswrapper[4724]: I0226 15:09:00.942858 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerStarted","Data":"cb192a726c061141fa0770ec94d9e8119f120228e9de02a7bf4cf725a35fd412"} Feb 26 15:09:01 crc kubenswrapper[4724]: I0226 15:09:01.968999 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerStarted","Data":"593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1"} Feb 26 15:09:03 crc kubenswrapper[4724]: I0226 15:09:03.985935 4724 generic.go:334] "Generic (PLEG): container finished" podID="71d790c3-14f6-496d-9f34-1b947b927697" containerID="593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1" exitCode=0 Feb 26 15:09:03 crc kubenswrapper[4724]: I0226 15:09:03.985998 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerDied","Data":"593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1"} Feb 26 15:09:04 crc kubenswrapper[4724]: I0226 15:09:04.997359 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerStarted","Data":"7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13"} Feb 26 15:09:05 crc 
kubenswrapper[4724]: I0226 15:09:05.020742 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9szvw" podStartSLOduration=3.546014136 podStartE2EDuration="7.018625901s" podCreationTimestamp="2026-02-26 15:08:58 +0000 UTC" firstStartedPulling="2026-02-26 15:09:00.943599513 +0000 UTC m=+14607.599338638" lastFinishedPulling="2026-02-26 15:09:04.416211288 +0000 UTC m=+14611.071950403" observedRunningTime="2026-02-26 15:09:05.011719787 +0000 UTC m=+14611.667458902" watchObservedRunningTime="2026-02-26 15:09:05.018625901 +0000 UTC m=+14611.674365016" Feb 26 15:09:09 crc kubenswrapper[4724]: I0226 15:09:09.276697 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:09 crc kubenswrapper[4724]: I0226 15:09:09.277203 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:10 crc kubenswrapper[4724]: I0226 15:09:10.354750 4724 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9szvw" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="registry-server" probeResult="failure" output=< Feb 26 15:09:10 crc kubenswrapper[4724]: timeout: failed to connect service ":50051" within 1s Feb 26 15:09:10 crc kubenswrapper[4724]: > Feb 26 15:09:19 crc kubenswrapper[4724]: I0226 15:09:19.366351 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:19 crc kubenswrapper[4724]: I0226 15:09:19.438844 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:19 crc kubenswrapper[4724]: I0226 15:09:19.613104 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9szvw"] Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.116324 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9szvw" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="registry-server" containerID="cri-o://7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13" gracePeriod=2 Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.831240 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.936898 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5d4m\" (UniqueName: \"kubernetes.io/projected/71d790c3-14f6-496d-9f34-1b947b927697-kube-api-access-x5d4m\") pod \"71d790c3-14f6-496d-9f34-1b947b927697\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.937032 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-catalog-content\") pod \"71d790c3-14f6-496d-9f34-1b947b927697\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.937115 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-utilities\") pod \"71d790c3-14f6-496d-9f34-1b947b927697\" (UID: \"71d790c3-14f6-496d-9f34-1b947b927697\") " Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.946186 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-utilities" (OuterVolumeSpecName: "utilities") pod "71d790c3-14f6-496d-9f34-1b947b927697" (UID: "71d790c3-14f6-496d-9f34-1b947b927697"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:09:21 crc kubenswrapper[4724]: I0226 15:09:21.969800 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d790c3-14f6-496d-9f34-1b947b927697-kube-api-access-x5d4m" (OuterVolumeSpecName: "kube-api-access-x5d4m") pod "71d790c3-14f6-496d-9f34-1b947b927697" (UID: "71d790c3-14f6-496d-9f34-1b947b927697"). InnerVolumeSpecName "kube-api-access-x5d4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.046082 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5d4m\" (UniqueName: \"kubernetes.io/projected/71d790c3-14f6-496d-9f34-1b947b927697-kube-api-access-x5d4m\") on node \"crc\" DevicePath \"\"" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.046113 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.156563 4724 generic.go:334] "Generic (PLEG): container finished" podID="71d790c3-14f6-496d-9f34-1b947b927697" containerID="7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13" exitCode=0 Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.156606 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerDied","Data":"7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13"} Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.156635 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9szvw" event={"ID":"71d790c3-14f6-496d-9f34-1b947b927697","Type":"ContainerDied","Data":"cb192a726c061141fa0770ec94d9e8119f120228e9de02a7bf4cf725a35fd412"} Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.156653 4724 scope.go:117] "RemoveContainer" containerID="7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.156749 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9szvw" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.172907 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71d790c3-14f6-496d-9f34-1b947b927697" (UID: "71d790c3-14f6-496d-9f34-1b947b927697"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.191782 4724 scope.go:117] "RemoveContainer" containerID="593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.230216 4724 scope.go:117] "RemoveContainer" containerID="d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.258459 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71d790c3-14f6-496d-9f34-1b947b927697-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.276497 4724 scope.go:117] "RemoveContainer" containerID="7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13" Feb 26 15:09:22 crc kubenswrapper[4724]: E0226 15:09:22.278642 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13\": container with ID starting with 7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13 not found: ID does not exist" containerID="7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.278688 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13"} err="failed to get container status \"7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13\": rpc error: code = NotFound desc = could not find container \"7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13\": container with ID starting with 7701163823b0099f865ef186b8ee33a91f0c2cf30ade5744ae77b81854707d13 not found: ID does not exist" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.278718 4724 scope.go:117] "RemoveContainer" containerID="593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1" Feb 26 15:09:22 crc kubenswrapper[4724]: E0226 15:09:22.279242 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1\": container with ID starting with 593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1 not found: ID does not exist" containerID="593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.279288 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1"} err="failed to get container status \"593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1\": rpc error: code = NotFound desc = could not find container \"593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1\": container with ID starting with 593c09922796a778c1212d772e6b1c14ecc89036ce5893d1e37db1b34be3dee1 not found: ID does not exist" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.279314 4724 scope.go:117] "RemoveContainer" containerID="d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1" Feb 26 15:09:22 crc kubenswrapper[4724]: E0226 15:09:22.279636 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1\": container with ID starting with d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1 not found: ID does not exist" containerID="d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.279657 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1"} err="failed to get container status \"d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1\": rpc error: code = NotFound desc = could not find container \"d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1\": container with ID starting with d8c96758306c81978881838552090a620b8917b86ec35b492d661acdef811ae1 not found: ID does not exist" Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.500062 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9szvw"] Feb 26 15:09:22 crc kubenswrapper[4724]: I0226 15:09:22.511658 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9szvw"] Feb 26 15:09:23 crc kubenswrapper[4724]: I0226 15:09:23.988088 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71d790c3-14f6-496d-9f34-1b947b927697" path="/var/lib/kubelet/pods/71d790c3-14f6-496d-9f34-1b947b927697/volumes" Feb 26 15:09:39 crc kubenswrapper[4724]: I0226 15:09:39.915233 4724 scope.go:117] "RemoveContainer" containerID="f7239ecfd13429cc4b117011d7457d30bf0871f2ad76c01507123249151f04e2" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.170015 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gtbpr"] Feb 26 15:09:51 crc kubenswrapper[4724]: E0226 15:09:51.171077 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="registry-server" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.171096 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="registry-server" Feb 26 15:09:51 crc kubenswrapper[4724]: E0226 15:09:51.171129 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="extract-utilities" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.171138 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="extract-utilities" Feb 26 15:09:51 crc kubenswrapper[4724]: E0226 15:09:51.171174 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="extract-content" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.171200 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="extract-content" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.171441 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d790c3-14f6-496d-9f34-1b947b927697" containerName="registry-server" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.178682 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.199679 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtbpr"] Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.332327 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878m8\" (UniqueName: \"kubernetes.io/projected/522d1ecc-9814-40cd-a21f-48a9aa9a2940-kube-api-access-878m8\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.332408 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-utilities\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.332491 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-catalog-content\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.433642 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-utilities\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.433759 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-catalog-content\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.433829 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878m8\" (UniqueName: \"kubernetes.io/projected/522d1ecc-9814-40cd-a21f-48a9aa9a2940-kube-api-access-878m8\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.434480 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-utilities\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.434687 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-catalog-content\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.457757 4724 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-878m8\" (UniqueName: \"kubernetes.io/projected/522d1ecc-9814-40cd-a21f-48a9aa9a2940-kube-api-access-878m8\") pod \"redhat-marketplace-gtbpr\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:51 crc kubenswrapper[4724]: I0226 15:09:51.509504 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:09:52 crc kubenswrapper[4724]: I0226 15:09:52.054100 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtbpr"] Feb 26 15:09:52 crc kubenswrapper[4724]: I0226 15:09:52.460608 4724 generic.go:334] "Generic (PLEG): container finished" podID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerID="ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb" exitCode=0 Feb 26 15:09:52 crc kubenswrapper[4724]: I0226 15:09:52.460875 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerDied","Data":"ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb"} Feb 26 15:09:52 crc kubenswrapper[4724]: I0226 15:09:52.460910 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerStarted","Data":"fe31a39e2c00e2d7b4bce0a79798c41cd00a58db546fd9442cb507213b74b563"} Feb 26 15:09:53 crc kubenswrapper[4724]: I0226 15:09:53.472369 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerStarted","Data":"70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d"} Feb 26 15:09:54 crc kubenswrapper[4724]: I0226 15:09:54.484096 4724 generic.go:334] "Generic (PLEG): container finished" podID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerID="70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d" exitCode=0 Feb 26 15:09:54 crc kubenswrapper[4724]: I0226 15:09:54.484243 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerDied","Data":"70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d"} Feb 26 15:09:55 crc kubenswrapper[4724]: I0226 15:09:55.493451 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerStarted","Data":"9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba"} Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.271902 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gtbpr" podStartSLOduration=6.881828055 podStartE2EDuration="9.271880643s" podCreationTimestamp="2026-02-26 15:09:51 +0000 UTC" firstStartedPulling="2026-02-26 15:09:52.463645028 +0000 UTC m=+14659.119384153" lastFinishedPulling="2026-02-26 15:09:54.853697596 +0000 UTC m=+14661.509436741" observedRunningTime="2026-02-26 15:09:55.51434823 +0000 UTC m=+14662.170087355" watchObservedRunningTime="2026-02-26 15:10:00.271880643 +0000 UTC m=+14666.927619768" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.280775 4724 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29535310-2vwg5"] Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.282453 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.291342 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.292376 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.293085 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.297368 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535310-2vwg5"] Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.409447 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/38e7dcfc-4859-43be-939d-d17ba754143e-kube-api-access-9x2kl\") pod \"auto-csr-approver-29535310-2vwg5\" (UID: \"38e7dcfc-4859-43be-939d-d17ba754143e\") " pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.511866 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/38e7dcfc-4859-43be-939d-d17ba754143e-kube-api-access-9x2kl\") pod \"auto-csr-approver-29535310-2vwg5\" (UID: \"38e7dcfc-4859-43be-939d-d17ba754143e\") " pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.557374 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/38e7dcfc-4859-43be-939d-d17ba754143e-kube-api-access-9x2kl\") pod \"auto-csr-approver-29535310-2vwg5\" (UID: \"38e7dcfc-4859-43be-939d-d17ba754143e\") " pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:00 crc kubenswrapper[4724]: I0226 15:10:00.659735 4724 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:01 crc kubenswrapper[4724]: I0226 15:10:01.198884 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535310-2vwg5"] Feb 26 15:10:01 crc kubenswrapper[4724]: I0226 15:10:01.510191 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:10:01 crc kubenswrapper[4724]: I0226 15:10:01.510431 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:10:01 crc kubenswrapper[4724]: I0226 15:10:01.545173 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" event={"ID":"38e7dcfc-4859-43be-939d-d17ba754143e","Type":"ContainerStarted","Data":"2dc09d077115a31417e9dd5153d00c195223c45da164248e6561e618cd9f9faf"} Feb 26 15:10:01 crc kubenswrapper[4724]: I0226 15:10:01.588843 4724 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:10:02 crc kubenswrapper[4724]: I0226 15:10:02.613963 4724 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:10:03 crc kubenswrapper[4724]: I0226 15:10:03.168919 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtbpr"] Feb 26 15:10:04 crc kubenswrapper[4724]: I0226 15:10:04.586536 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" event={"ID":"38e7dcfc-4859-43be-939d-d17ba754143e","Type":"ContainerStarted","Data":"14761046c6bd03ad478ce1faaceb2317edcbb51e31bb85291e988906cceefec8"} Feb 26 15:10:04 crc kubenswrapper[4724]: I0226 15:10:04.587336 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gtbpr" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="registry-server" containerID="cri-o://9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba" gracePeriod=2 Feb 26 15:10:04 crc kubenswrapper[4724]: I0226 15:10:04.614706 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" podStartSLOduration=3.192988971 podStartE2EDuration="4.614687414s" podCreationTimestamp="2026-02-26 15:10:00 +0000 UTC" firstStartedPulling="2026-02-26 15:10:01.204940786 +0000 UTC m=+14667.860679901" lastFinishedPulling="2026-02-26 15:10:02.626639229 +0000 UTC m=+14669.282378344" observedRunningTime="2026-02-26 15:10:04.605861411 +0000 UTC m=+14671.261600536" watchObservedRunningTime="2026-02-26 15:10:04.614687414 +0000 UTC m=+14671.270426529" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.536368 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.598522 4724 generic.go:334] "Generic (PLEG): container finished" podID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerID="9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba" exitCode=0 Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.598594 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gtbpr" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.598628 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerDied","Data":"9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba"} Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.599610 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gtbpr" event={"ID":"522d1ecc-9814-40cd-a21f-48a9aa9a2940","Type":"ContainerDied","Data":"fe31a39e2c00e2d7b4bce0a79798c41cd00a58db546fd9442cb507213b74b563"} Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.599632 4724 scope.go:117] "RemoveContainer" containerID="9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.605577 4724 generic.go:334] "Generic (PLEG): container finished" podID="38e7dcfc-4859-43be-939d-d17ba754143e" containerID="14761046c6bd03ad478ce1faaceb2317edcbb51e31bb85291e988906cceefec8" exitCode=0 Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.605617 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" event={"ID":"38e7dcfc-4859-43be-939d-d17ba754143e","Type":"ContainerDied","Data":"14761046c6bd03ad478ce1faaceb2317edcbb51e31bb85291e988906cceefec8"} Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.627077 4724 scope.go:117] "RemoveContainer" containerID="70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.632382 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-878m8\" (UniqueName: \"kubernetes.io/projected/522d1ecc-9814-40cd-a21f-48a9aa9a2940-kube-api-access-878m8\") pod \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.632633 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-utilities\") pod \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.632798 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-catalog-content\") pod \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\" (UID: \"522d1ecc-9814-40cd-a21f-48a9aa9a2940\") " Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.634752 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-utilities" (OuterVolumeSpecName: "utilities") pod "522d1ecc-9814-40cd-a21f-48a9aa9a2940" (UID: "522d1ecc-9814-40cd-a21f-48a9aa9a2940"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.655868 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522d1ecc-9814-40cd-a21f-48a9aa9a2940-kube-api-access-878m8" (OuterVolumeSpecName: "kube-api-access-878m8") pod "522d1ecc-9814-40cd-a21f-48a9aa9a2940" (UID: "522d1ecc-9814-40cd-a21f-48a9aa9a2940"). 
InnerVolumeSpecName "kube-api-access-878m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.665126 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "522d1ecc-9814-40cd-a21f-48a9aa9a2940" (UID: "522d1ecc-9814-40cd-a21f-48a9aa9a2940"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.665431 4724 scope.go:117] "RemoveContainer" containerID="ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.730402 4724 scope.go:117] "RemoveContainer" containerID="9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba" Feb 26 15:10:05 crc kubenswrapper[4724]: E0226 15:10:05.731867 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba\": container with ID starting with 9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba not found: ID does not exist" containerID="9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.731913 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba"} err="failed to get container status \"9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba\": rpc error: code = NotFound desc = could not find container \"9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba\": container with ID starting with 9c817ccd9bb597ba770ed2b54a72b065240097913cb13e1be81f258cde0e50ba not found: ID does not exist" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.731941 4724 scope.go:117] "RemoveContainer" containerID="70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d" Feb 26 15:10:05 crc kubenswrapper[4724]: E0226 15:10:05.732352 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d\": container with ID starting with 70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d not found: ID does not exist" containerID="70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.732382 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d"} err="failed to get container status \"70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d\": rpc error: code = NotFound desc = could not find container \"70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d\": container with ID starting with 70d8bb1f542a9c8b755f8e373ba750588be11bf32a85d37e603d6acb671f657d not found: ID does not exist" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.732403 4724 scope.go:117] "RemoveContainer" containerID="ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb" Feb 26 15:10:05 crc kubenswrapper[4724]: E0226 15:10:05.733036 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb\": container with ID starting with ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb not found: ID does not exist" containerID="ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.733060 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb"} err="failed to get container status \"ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb\": rpc error: code = NotFound desc = could not find container \"ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb\": container with ID starting with ea40c5eea5806d4859c6a94b5f2265f96de3f5a7c64b88c9896ff261a981d5bb not found: ID does not exist" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.734927 4724 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.734950 4724 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522d1ecc-9814-40cd-a21f-48a9aa9a2940-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.734960 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-878m8\" (UniqueName: \"kubernetes.io/projected/522d1ecc-9814-40cd-a21f-48a9aa9a2940-kube-api-access-878m8\") on node \"crc\" DevicePath \"\"" Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.956865 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtbpr"] Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.966986 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gtbpr"] Feb 26 15:10:05 crc kubenswrapper[4724]: I0226 15:10:05.987250 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" path="/var/lib/kubelet/pods/522d1ecc-9814-40cd-a21f-48a9aa9a2940/volumes" Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.016946 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.161894 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/38e7dcfc-4859-43be-939d-d17ba754143e-kube-api-access-9x2kl\") pod \"38e7dcfc-4859-43be-939d-d17ba754143e\" (UID: \"38e7dcfc-4859-43be-939d-d17ba754143e\") " Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.169314 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e7dcfc-4859-43be-939d-d17ba754143e-kube-api-access-9x2kl" (OuterVolumeSpecName: "kube-api-access-9x2kl") pod "38e7dcfc-4859-43be-939d-d17ba754143e" (UID: "38e7dcfc-4859-43be-939d-d17ba754143e"). InnerVolumeSpecName "kube-api-access-9x2kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.264816 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/38e7dcfc-4859-43be-939d-d17ba754143e-kube-api-access-9x2kl\") on node \"crc\" DevicePath \"\"" Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.639891 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" event={"ID":"38e7dcfc-4859-43be-939d-d17ba754143e","Type":"ContainerDied","Data":"2dc09d077115a31417e9dd5153d00c195223c45da164248e6561e618cd9f9faf"} Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.639951 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-2vwg5" Feb 26 15:10:07 crc kubenswrapper[4724]: I0226 15:10:07.640195 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc09d077115a31417e9dd5153d00c195223c45da164248e6561e618cd9f9faf" Feb 26 15:10:08 crc kubenswrapper[4724]: I0226 15:10:08.133391 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-cxdzm"] Feb 26 15:10:08 crc kubenswrapper[4724]: I0226 15:10:08.142913 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-cxdzm"] Feb 26 15:10:09 crc kubenswrapper[4724]: I0226 15:10:09.992768 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4947b58-8051-4d46-8de7-05973c9428ea" path="/var/lib/kubelet/pods/f4947b58-8051-4d46-8de7-05973c9428ea/volumes" Feb 26 15:10:16 crc kubenswrapper[4724]: I0226 15:10:16.908667 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:10:16 crc kubenswrapper[4724]: I0226 15:10:16.909208 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:10:24 crc kubenswrapper[4724]: I0226 15:10:24.822010 4724 generic.go:334] "Generic (PLEG): container finished" podID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerID="4f4fd577dd0762cfc170fef528b65163b8f5bf6ec0e4412bb147252841411f0e" exitCode=0 Feb 26 15:10:24 crc kubenswrapper[4724]: I0226 15:10:24.822279 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pk8nj/must-gather-r9677" event={"ID":"cf705be9-0e89-49db-aa47-c709a3f7c82c","Type":"ContainerDied","Data":"4f4fd577dd0762cfc170fef528b65163b8f5bf6ec0e4412bb147252841411f0e"} Feb 26 15:10:24 crc kubenswrapper[4724]: I0226 15:10:24.822982 4724 scope.go:117] "RemoveContainer" containerID="4f4fd577dd0762cfc170fef528b65163b8f5bf6ec0e4412bb147252841411f0e" Feb 26 15:10:25 crc kubenswrapper[4724]: I0226 15:10:25.202155 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pk8nj_must-gather-r9677_cf705be9-0e89-49db-aa47-c709a3f7c82c/gather/0.log" Feb 26 15:10:37 crc kubenswrapper[4724]: I0226 15:10:37.571107 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-pk8nj/must-gather-r9677"] Feb 26 15:10:37 crc kubenswrapper[4724]: I0226 15:10:37.586820 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pk8nj/must-gather-r9677"] Feb 26 15:10:37 crc kubenswrapper[4724]: I0226 15:10:37.587077 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pk8nj/must-gather-r9677" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="copy" containerID="cri-o://16b0304d71d80fb6806a6d1c03a18ee7193b299921ffb04aa7ada07e848268bf" gracePeriod=2 Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.023249 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pk8nj_must-gather-r9677_cf705be9-0e89-49db-aa47-c709a3f7c82c/copy/0.log" Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.023904 4724 generic.go:334] "Generic (PLEG): container finished" podID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerID="16b0304d71d80fb6806a6d1c03a18ee7193b299921ffb04aa7ada07e848268bf" exitCode=143 Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.214139 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pk8nj_must-gather-r9677_cf705be9-0e89-49db-aa47-c709a3f7c82c/copy/0.log" Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.214475 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/must-gather-r9677" Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.287251 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf705be9-0e89-49db-aa47-c709a3f7c82c-must-gather-output\") pod \"cf705be9-0e89-49db-aa47-c709a3f7c82c\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.287516 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm6fk\" (UniqueName: \"kubernetes.io/projected/cf705be9-0e89-49db-aa47-c709a3f7c82c-kube-api-access-mm6fk\") pod \"cf705be9-0e89-49db-aa47-c709a3f7c82c\" (UID: \"cf705be9-0e89-49db-aa47-c709a3f7c82c\") " Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.295462 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf705be9-0e89-49db-aa47-c709a3f7c82c-kube-api-access-mm6fk" (OuterVolumeSpecName: "kube-api-access-mm6fk") pod "cf705be9-0e89-49db-aa47-c709a3f7c82c" (UID: "cf705be9-0e89-49db-aa47-c709a3f7c82c"). InnerVolumeSpecName "kube-api-access-mm6fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.389676 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm6fk\" (UniqueName: \"kubernetes.io/projected/cf705be9-0e89-49db-aa47-c709a3f7c82c-kube-api-access-mm6fk\") on node \"crc\" DevicePath \"\"" Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.436923 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf705be9-0e89-49db-aa47-c709a3f7c82c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cf705be9-0e89-49db-aa47-c709a3f7c82c" (UID: "cf705be9-0e89-49db-aa47-c709a3f7c82c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:10:38 crc kubenswrapper[4724]: I0226 15:10:38.491818 4724 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf705be9-0e89-49db-aa47-c709a3f7c82c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 15:10:39 crc kubenswrapper[4724]: I0226 15:10:39.088565 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pk8nj_must-gather-r9677_cf705be9-0e89-49db-aa47-c709a3f7c82c/copy/0.log" Feb 26 15:10:39 crc kubenswrapper[4724]: I0226 15:10:39.089151 4724 scope.go:117] "RemoveContainer" containerID="16b0304d71d80fb6806a6d1c03a18ee7193b299921ffb04aa7ada07e848268bf" Feb 26 15:10:39 crc kubenswrapper[4724]: I0226 15:10:39.089392 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pk8nj/must-gather-r9677" Feb 26 15:10:39 crc kubenswrapper[4724]: I0226 15:10:39.155050 4724 scope.go:117] "RemoveContainer" containerID="4f4fd577dd0762cfc170fef528b65163b8f5bf6ec0e4412bb147252841411f0e" Feb 26 15:10:39 crc kubenswrapper[4724]: I0226 15:10:39.986167 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" path="/var/lib/kubelet/pods/cf705be9-0e89-49db-aa47-c709a3f7c82c/volumes" Feb 26 15:10:40 crc kubenswrapper[4724]: I0226 15:10:40.047381 4724 scope.go:117] "RemoveContainer" containerID="c7eacefbf2101237f34adf361505d80be3bbfedd62ec814335798547922c5cc3" Feb 26 15:10:40 crc kubenswrapper[4724]: I0226 15:10:40.086995 4724 scope.go:117] "RemoveContainer" containerID="a6a53b9d68b730a83db611f3012963d89e9c4750c79b3a183c0cd6369c9a52aa" Feb 26 15:10:40 crc kubenswrapper[4724]: I0226 15:10:40.131845 4724 scope.go:117] "RemoveContainer" containerID="e315be8bf9d4ec81a6c49869422c6ee416f97b62bf15063d9c5ae928b975836f" Feb 26 15:10:46 crc kubenswrapper[4724]: I0226 15:10:46.906945 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:10:46 crc kubenswrapper[4724]: I0226 15:10:46.907652 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:11:07 crc kubenswrapper[4724]: I0226 15:11:07.410822 4724 generic.go:334] "Generic (PLEG): container finished" podID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerID="23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c" exitCode=0 Feb 26 15:11:07 crc kubenswrapper[4724]: I0226 15:11:07.410919 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gpvcm/must-gather-btd5b" event={"ID":"c834cdec-42c9-43cf-93ed-975a34f0a532","Type":"ContainerDied","Data":"23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c"} Feb 26 15:11:07 crc kubenswrapper[4724]: I0226 15:11:07.413459 4724 scope.go:117] "RemoveContainer" containerID="23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c" Feb 26 15:11:08 crc kubenswrapper[4724]: I0226 15:11:08.050458 4724 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-gpvcm_must-gather-btd5b_c834cdec-42c9-43cf-93ed-975a34f0a532/gather/0.log" Feb 26 15:11:16 crc kubenswrapper[4724]: I0226 15:11:16.906768 4724 patch_prober.go:28] interesting pod/machine-config-daemon-5gv7d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:11:16 crc kubenswrapper[4724]: I0226 15:11:16.907766 4724 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:11:16 crc kubenswrapper[4724]: I0226 15:11:16.907814 4724 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" Feb 26 15:11:16 crc kubenswrapper[4724]: I0226 15:11:16.913027 4724 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a"} pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:11:16 crc kubenswrapper[4724]: I0226 15:11:16.913096 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerName="machine-config-daemon" containerID="cri-o://b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" gracePeriod=600 Feb 26 15:11:17 crc kubenswrapper[4724]: E0226 15:11:17.041875 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:11:17 crc kubenswrapper[4724]: I0226 15:11:17.517568 4724 generic.go:334] "Generic (PLEG): container finished" podID="b2405c92-e87c-4e60-ac28-0cd51800d9df" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" exitCode=0 Feb 26 15:11:17 crc kubenswrapper[4724]: I0226 15:11:17.517658 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" event={"ID":"b2405c92-e87c-4e60-ac28-0cd51800d9df","Type":"ContainerDied","Data":"b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a"} Feb 26 15:11:17 crc kubenswrapper[4724]: I0226 15:11:17.518113 4724 scope.go:117] "RemoveContainer" containerID="1413d2ccbd104e8150cde8d90f88242e089bd6ca48f9c203576affea50184696" Feb 26 15:11:17 crc kubenswrapper[4724]: I0226 15:11:17.519055 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:11:17 crc kubenswrapper[4724]: E0226 15:11:17.519553 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:11:22 crc kubenswrapper[4724]: I0226 15:11:22.606713 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gpvcm/must-gather-btd5b"] Feb 26 15:11:22 crc kubenswrapper[4724]: I0226 15:11:22.607224 4724 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gpvcm/must-gather-btd5b" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="copy" containerID="cri-o://a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf" gracePeriod=2 Feb 26 15:11:22 crc kubenswrapper[4724]: I0226 15:11:22.614819 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gpvcm/must-gather-btd5b"] Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.565163 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gpvcm_must-gather-btd5b_c834cdec-42c9-43cf-93ed-975a34f0a532/copy/0.log" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.566392 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.585072 4724 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gpvcm_must-gather-btd5b_c834cdec-42c9-43cf-93ed-975a34f0a532/copy/0.log" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.585438 4724 generic.go:334] "Generic (PLEG): container finished" podID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerID="a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf" exitCode=143 Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.585491 4724 scope.go:117] "RemoveContainer" containerID="a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.585665 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gpvcm/must-gather-btd5b" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.637020 4724 scope.go:117] "RemoveContainer" containerID="23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.689530 4724 scope.go:117] "RemoveContainer" containerID="a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf" Feb 26 15:11:23 crc kubenswrapper[4724]: E0226 15:11:23.689873 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf\": container with ID starting with a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf not found: ID does not exist" containerID="a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.689902 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf"} err="failed to get container status \"a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf\": rpc error: code = NotFound desc = could not find container \"a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf\": container with ID starting with a336043198365ed895cb23a6918da3d2cba63d2680539e5a75c66af6b601f3bf not found: ID does not exist" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.689923 4724 scope.go:117] "RemoveContainer" containerID="23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c" Feb 26 15:11:23 crc kubenswrapper[4724]: E0226 15:11:23.690107 4724 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c\": container with ID starting with 23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c not found: ID does not exist" containerID="23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.690125 4724 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c"} err="failed to get container status \"23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c\": rpc error: code = NotFound desc = could not find container \"23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c\": container with ID starting with 23791348c4653785d87246dc6859319a59d188710c3bf9d89f35132896d2b33c not found: ID does not exist" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.729662 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plkdf\" (UniqueName: \"kubernetes.io/projected/c834cdec-42c9-43cf-93ed-975a34f0a532-kube-api-access-plkdf\") pod \"c834cdec-42c9-43cf-93ed-975a34f0a532\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.729929 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c834cdec-42c9-43cf-93ed-975a34f0a532-must-gather-output\") pod \"c834cdec-42c9-43cf-93ed-975a34f0a532\" (UID: \"c834cdec-42c9-43cf-93ed-975a34f0a532\") " Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.743393 4724 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c834cdec-42c9-43cf-93ed-975a34f0a532-kube-api-access-plkdf" (OuterVolumeSpecName: "kube-api-access-plkdf") pod "c834cdec-42c9-43cf-93ed-975a34f0a532" (UID: "c834cdec-42c9-43cf-93ed-975a34f0a532"). InnerVolumeSpecName "kube-api-access-plkdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.832017 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plkdf\" (UniqueName: \"kubernetes.io/projected/c834cdec-42c9-43cf-93ed-975a34f0a532-kube-api-access-plkdf\") on node \"crc\" DevicePath \"\"" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.905686 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c834cdec-42c9-43cf-93ed-975a34f0a532-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c834cdec-42c9-43cf-93ed-975a34f0a532" (UID: "c834cdec-42c9-43cf-93ed-975a34f0a532"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.933527 4724 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c834cdec-42c9-43cf-93ed-975a34f0a532-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 15:11:23 crc kubenswrapper[4724]: I0226 15:11:23.987666 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" path="/var/lib/kubelet/pods/c834cdec-42c9-43cf-93ed-975a34f0a532/volumes" Feb 26 15:11:30 crc kubenswrapper[4724]: I0226 15:11:30.975679 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:11:30 crc kubenswrapper[4724]: E0226 15:11:30.976442 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:11:45 crc kubenswrapper[4724]: I0226 15:11:45.977571 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:11:45 crc kubenswrapper[4724]: E0226 15:11:45.981907 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:11:59 crc kubenswrapper[4724]: I0226 15:11:59.979584 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:11:59 crc kubenswrapper[4724]: E0226 15:11:59.986847 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.263586 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535312-nl4px"]
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269022 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="registry-server"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269048 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="registry-server"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269076 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="gather"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269084 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="gather"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269113 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e7dcfc-4859-43be-939d-d17ba754143e" containerName="oc"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269120 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e7dcfc-4859-43be-939d-d17ba754143e" containerName="oc"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269135 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="extract-utilities"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269142 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="extract-utilities"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269154 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="copy"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269160 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="copy"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269212 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="copy"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269222 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="copy"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269234 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="gather"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269242 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="gather"
Feb 26 15:12:00 crc kubenswrapper[4724]: E0226 15:12:00.269257 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="extract-content"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.269265 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="extract-content"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.271700 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="gather"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.271734 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="gather"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.271746 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e7dcfc-4859-43be-939d-d17ba754143e" containerName="oc"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.271766 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf705be9-0e89-49db-aa47-c709a3f7c82c" containerName="copy"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.271777 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="c834cdec-42c9-43cf-93ed-975a34f0a532" containerName="copy"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.271789 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d1ecc-9814-40cd-a21f-48a9aa9a2940" containerName="registry-server"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.275902 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-nl4px"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.289152 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.289133 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.289135 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.334360 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472lx\" (UniqueName: \"kubernetes.io/projected/65790d3d-cd35-45dc-87c0-cd4e327b6968-kube-api-access-472lx\") pod \"auto-csr-approver-29535312-nl4px\" (UID: \"65790d3d-cd35-45dc-87c0-cd4e327b6968\") " pod="openshift-infra/auto-csr-approver-29535312-nl4px"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.338665 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535312-nl4px"]
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.435988 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-472lx\" (UniqueName: \"kubernetes.io/projected/65790d3d-cd35-45dc-87c0-cd4e327b6968-kube-api-access-472lx\") pod \"auto-csr-approver-29535312-nl4px\" (UID: \"65790d3d-cd35-45dc-87c0-cd4e327b6968\") " pod="openshift-infra/auto-csr-approver-29535312-nl4px"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.466994 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-472lx\" (UniqueName: \"kubernetes.io/projected/65790d3d-cd35-45dc-87c0-cd4e327b6968-kube-api-access-472lx\") pod \"auto-csr-approver-29535312-nl4px\" (UID: \"65790d3d-cd35-45dc-87c0-cd4e327b6968\") " pod="openshift-infra/auto-csr-approver-29535312-nl4px"
Feb 26 15:12:00 crc kubenswrapper[4724]: I0226 15:12:00.604806 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-nl4px"
Feb 26 15:12:01 crc kubenswrapper[4724]: I0226 15:12:01.723417 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535312-nl4px"]
Feb 26 15:12:02 crc kubenswrapper[4724]: I0226 15:12:02.015723 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-nl4px" event={"ID":"65790d3d-cd35-45dc-87c0-cd4e327b6968","Type":"ContainerStarted","Data":"cc37b69be134e981c832ab74cbbd85864e9d9c43f4bb8204a43c73e9443978be"}
Feb 26 15:12:05 crc kubenswrapper[4724]: I0226 15:12:05.046011 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-nl4px" event={"ID":"65790d3d-cd35-45dc-87c0-cd4e327b6968","Type":"ContainerStarted","Data":"737f9edff01fb8dbd604b92f152002252bb737471261f2a636ac504b1773ea61"}
Feb 26 15:12:05 crc kubenswrapper[4724]: I0226 15:12:05.068915 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535312-nl4px" podStartSLOduration=3.091775991 podStartE2EDuration="5.066841638s" podCreationTimestamp="2026-02-26 15:12:00 +0000 UTC" firstStartedPulling="2026-02-26 15:12:01.778944868 +0000 UTC m=+14788.434683983" lastFinishedPulling="2026-02-26 15:12:03.754010475 +0000 UTC m=+14790.409749630" observedRunningTime="2026-02-26 15:12:05.060258642 +0000 UTC m=+14791.715997797" watchObservedRunningTime="2026-02-26 15:12:05.066841638 +0000 UTC m=+14791.722580763"
Feb 26 15:12:07 crc kubenswrapper[4724]: I0226 15:12:07.070699 4724 generic.go:334] "Generic (PLEG): container finished" podID="65790d3d-cd35-45dc-87c0-cd4e327b6968" containerID="737f9edff01fb8dbd604b92f152002252bb737471261f2a636ac504b1773ea61" exitCode=0
Feb 26 15:12:07 crc kubenswrapper[4724]: I0226 15:12:07.070785 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-nl4px" event={"ID":"65790d3d-cd35-45dc-87c0-cd4e327b6968","Type":"ContainerDied","Data":"737f9edff01fb8dbd604b92f152002252bb737471261f2a636ac504b1773ea61"}
Feb 26 15:12:08 crc kubenswrapper[4724]: I0226 15:12:08.479480 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-nl4px"
Feb 26 15:12:08 crc kubenswrapper[4724]: I0226 15:12:08.510852 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-472lx\" (UniqueName: \"kubernetes.io/projected/65790d3d-cd35-45dc-87c0-cd4e327b6968-kube-api-access-472lx\") pod \"65790d3d-cd35-45dc-87c0-cd4e327b6968\" (UID: \"65790d3d-cd35-45dc-87c0-cd4e327b6968\") "
Feb 26 15:12:08 crc kubenswrapper[4724]: I0226 15:12:08.523702 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65790d3d-cd35-45dc-87c0-cd4e327b6968-kube-api-access-472lx" (OuterVolumeSpecName: "kube-api-access-472lx") pod "65790d3d-cd35-45dc-87c0-cd4e327b6968" (UID: "65790d3d-cd35-45dc-87c0-cd4e327b6968"). InnerVolumeSpecName "kube-api-access-472lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:12:08 crc kubenswrapper[4724]: I0226 15:12:08.613141 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-472lx\" (UniqueName: \"kubernetes.io/projected/65790d3d-cd35-45dc-87c0-cd4e327b6968-kube-api-access-472lx\") on node \"crc\" DevicePath \"\"" Feb 26 15:12:09 crc kubenswrapper[4724]: I0226 15:12:09.095347 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-nl4px" event={"ID":"65790d3d-cd35-45dc-87c0-cd4e327b6968","Type":"ContainerDied","Data":"cc37b69be134e981c832ab74cbbd85864e9d9c43f4bb8204a43c73e9443978be"} Feb 26 15:12:09 crc kubenswrapper[4724]: I0226 15:12:09.095392 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc37b69be134e981c832ab74cbbd85864e9d9c43f4bb8204a43c73e9443978be" Feb 26 15:12:09 crc kubenswrapper[4724]: I0226 15:12:09.095404 4724 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-nl4px" Feb 26 15:12:09 crc kubenswrapper[4724]: I0226 15:12:09.171886 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-hsqnv"] Feb 26 15:12:09 crc kubenswrapper[4724]: I0226 15:12:09.180645 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-hsqnv"] Feb 26 15:12:09 crc kubenswrapper[4724]: I0226 15:12:09.993557 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226418ca-21f5-40bb-9864-9f7f1cd2b562" path="/var/lib/kubelet/pods/226418ca-21f5-40bb-9864-9f7f1cd2b562/volumes" Feb 26 15:12:11 crc kubenswrapper[4724]: I0226 15:12:11.975807 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:12:11 crc kubenswrapper[4724]: E0226 15:12:11.976373 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:12:24 crc kubenswrapper[4724]: I0226 15:12:24.975610 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:12:24 crc kubenswrapper[4724]: E0226 15:12:24.977351 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:12:37 crc kubenswrapper[4724]: I0226 15:12:37.975659 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:12:37 crc kubenswrapper[4724]: E0226 15:12:37.976720 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:12:40 crc kubenswrapper[4724]: I0226 15:12:40.372400 4724 scope.go:117] "RemoveContainer" containerID="0e4114b2aa49fbf316363875311209c11c305440f5b255e6d69931206eeb73f5" Feb 26 15:12:52 crc kubenswrapper[4724]: I0226 15:12:52.976023 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:12:52 crc kubenswrapper[4724]: E0226 15:12:52.977428 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:13:03 crc kubenswrapper[4724]: I0226 15:13:03.989840 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:13:03 crc kubenswrapper[4724]: E0226 15:13:03.991072 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:13:16 crc kubenswrapper[4724]: I0226 15:13:16.975339 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:13:16 crc kubenswrapper[4724]: E0226 15:13:16.976101 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:13:29 crc kubenswrapper[4724]: I0226 15:13:29.975764 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:13:29 crc kubenswrapper[4724]: E0226 15:13:29.976767 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:13:40 crc kubenswrapper[4724]: I0226 15:13:40.975787 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:13:40 crc kubenswrapper[4724]: E0226 15:13:40.977029 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:13:51 crc kubenswrapper[4724]: I0226 15:13:51.976307 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:13:51 crc kubenswrapper[4724]: E0226 15:13:51.977333 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.226591 4724 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535314-dngmw"] Feb 26 15:14:00 crc kubenswrapper[4724]: E0226 15:14:00.227768 4724 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65790d3d-cd35-45dc-87c0-cd4e327b6968" containerName="oc" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.227791 4724 state_mem.go:107] "Deleted CPUSet assignment" podUID="65790d3d-cd35-45dc-87c0-cd4e327b6968" containerName="oc" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.228280 4724 memory_manager.go:354] "RemoveStaleState removing state" podUID="65790d3d-cd35-45dc-87c0-cd4e327b6968" containerName="oc" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.229426 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.233040 4724 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lp2vz" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.235854 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.236131 4724 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.260488 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535314-dngmw"] Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.289166 4724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jr6x\" (UniqueName: \"kubernetes.io/projected/63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf-kube-api-access-2jr6x\") pod \"auto-csr-approver-29535314-dngmw\" (UID: \"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf\") " pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.391395 4724 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jr6x\" (UniqueName: \"kubernetes.io/projected/63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf-kube-api-access-2jr6x\") pod \"auto-csr-approver-29535314-dngmw\" (UID: \"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf\") " pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.419160 4724 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jr6x\" (UniqueName: 
\"kubernetes.io/projected/63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf-kube-api-access-2jr6x\") pod \"auto-csr-approver-29535314-dngmw\" (UID: \"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf\") " pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:00 crc kubenswrapper[4724]: I0226 15:14:00.559477 4724 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:01 crc kubenswrapper[4724]: I0226 15:14:01.153988 4724 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535314-dngmw"] Feb 26 15:14:01 crc kubenswrapper[4724]: I0226 15:14:01.169641 4724 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:14:01 crc kubenswrapper[4724]: I0226 15:14:01.316289 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-dngmw" event={"ID":"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf","Type":"ContainerStarted","Data":"4b99e3f85e66cf781e96d5aeff8bc05d632d28b33951c7ec1e3ac77dfbc2e029"} Feb 26 15:14:03 crc kubenswrapper[4724]: I0226 15:14:03.338108 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-dngmw" event={"ID":"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf","Type":"ContainerStarted","Data":"2d9183de9b851e7f03f114ac1a170df36ef574f17d6e0d353b30779f617f42a5"} Feb 26 15:14:03 crc kubenswrapper[4724]: I0226 15:14:03.366224 4724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535314-dngmw" podStartSLOduration=2.257927116 podStartE2EDuration="3.366132896s" podCreationTimestamp="2026-02-26 15:14:00 +0000 UTC" firstStartedPulling="2026-02-26 15:14:01.168011778 +0000 UTC m=+14907.823750893" lastFinishedPulling="2026-02-26 15:14:02.276217558 +0000 UTC m=+14908.931956673" observedRunningTime="2026-02-26 15:14:03.353776515 +0000 UTC m=+14910.009515690" watchObservedRunningTime="2026-02-26 15:14:03.366132896 +0000 UTC m=+14910.021872061" Feb 26 15:14:04 crc kubenswrapper[4724]: I0226 15:14:04.350916 4724 generic.go:334] "Generic (PLEG): container finished" podID="63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf" containerID="2d9183de9b851e7f03f114ac1a170df36ef574f17d6e0d353b30779f617f42a5" exitCode=0 Feb 26 15:14:04 crc kubenswrapper[4724]: I0226 15:14:04.351010 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-dngmw" event={"ID":"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf","Type":"ContainerDied","Data":"2d9183de9b851e7f03f114ac1a170df36ef574f17d6e0d353b30779f617f42a5"} Feb 26 15:14:05 crc kubenswrapper[4724]: I0226 15:14:05.715814 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:05 crc kubenswrapper[4724]: I0226 15:14:05.815224 4724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jr6x\" (UniqueName: \"kubernetes.io/projected/63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf-kube-api-access-2jr6x\") pod \"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf\" (UID: \"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf\") " Feb 26 15:14:05 crc kubenswrapper[4724]: I0226 15:14:05.821431 4724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf-kube-api-access-2jr6x" (OuterVolumeSpecName: "kube-api-access-2jr6x") pod "63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf" (UID: "63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf"). InnerVolumeSpecName "kube-api-access-2jr6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:14:05 crc kubenswrapper[4724]: I0226 15:14:05.918258 4724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jr6x\" (UniqueName: \"kubernetes.io/projected/63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf-kube-api-access-2jr6x\") on node \"crc\" DevicePath \"\"" Feb 26 15:14:06 crc kubenswrapper[4724]: I0226 15:14:06.380403 4724 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-dngmw" event={"ID":"63fa354e-cb69-4a9c-8a1b-b4b92a1b46bf","Type":"ContainerDied","Data":"4b99e3f85e66cf781e96d5aeff8bc05d632d28b33951c7ec1e3ac77dfbc2e029"} Feb 26 15:14:06 crc kubenswrapper[4724]: I0226 15:14:06.380707 4724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b99e3f85e66cf781e96d5aeff8bc05d632d28b33951c7ec1e3ac77dfbc2e029" Feb 26 15:14:06 crc kubenswrapper[4724]: I0226 15:14:06.380449 4724 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-dngmw" Feb 26 15:14:06 crc kubenswrapper[4724]: I0226 15:14:06.432933 4724 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-mjbmn"] Feb 26 15:14:06 crc kubenswrapper[4724]: I0226 15:14:06.443309 4724 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-mjbmn"] Feb 26 15:14:06 crc kubenswrapper[4724]: I0226 15:14:06.975098 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:14:06 crc kubenswrapper[4724]: E0226 15:14:06.975436 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df" Feb 26 15:14:07 crc kubenswrapper[4724]: I0226 15:14:07.994127 4724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863b6279-0e0b-4216-be32-a72df2eb498e" path="/var/lib/kubelet/pods/863b6279-0e0b-4216-be32-a72df2eb498e/volumes" Feb 26 15:14:19 crc kubenswrapper[4724]: I0226 15:14:19.976796 4724 scope.go:117] "RemoveContainer" containerID="b9047d0eee52a41e6d655152ea0f7588562373fcb03e232e85dbab266d35325a" Feb 26 15:14:19 crc kubenswrapper[4724]: E0226 15:14:19.979244 4724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gv7d_openshift-machine-config-operator(b2405c92-e87c-4e60-ac28-0cd51800d9df)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gv7d" podUID="b2405c92-e87c-4e60-ac28-0cd51800d9df"